| repo_name (string, 9–75 chars) | topic (30 classes) | issue_number (int64, 1–203k) | title (string, 1–976 chars) | body (string, 0–254k chars) | state (2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38–105 chars) | labels (list, 0–9 items) | user_login (string, 1–39 chars) | comments_count (int64, 0–452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
tflearn/tflearn
|
tensorflow
| 798
|
WARNING:tensorflow:Error encountered when serializing layer_variables/seq2seq_model. Type is unsupported, or the types of the items don't match field type in CollectionDef. 'list' object has no attribute 'name'
|
Can someone please help me with this error? I do not know how to fix it. I am using TF 1.0.1 and tflearn 0.3.1. It makes no sense to me. Please let me know if you have worked around it or fixed it.
|
open
|
2017-06-15T23:06:07Z
|
2017-07-21T11:00:09Z
|
https://github.com/tflearn/tflearn/issues/798
|
[] |
rahulraju93
| 2
|
amdegroot/ssd.pytorch
|
computer-vision
| 161
|
Error when training on a custom dataset
|
I want to train on my own dataset. I have solved some problems, but I cannot get past the error below.
The ground-truth files in my dataset consist of (x, y, w, h, class_label) entries in *.txt files.
loss_c = loss_c.view(num, -1)
loss_c[pos] = 0 # filter out pos boxes for now
Is there a problem here?

These lines are from multibox_loss.py.

If I run with --cuda=False to confirm, the following error occurs:
File "/home/jhryu/Downloads/ssd.pytorch/layers/modules/multibox_loss.py", line 103, in forward
loss_c = log_sum_exp(batch_conf) - batch_conf.gather(1, conf_t.view(-1, 1))
RuntimeError: Invalid index in gather at /opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/TH/generic/THTensorMath.c:600
Thank you for your reply.
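A common cause of this gather error is the annotation format: ssd.pytorch's VOC-style dataset transforms feed the loss normalized corner coordinates plus a 0-based class index, so a (x, y, w, h, class_label) entry has to be converted first. A minimal sketch of that conversion, assuming (x, y) is the top-left corner in pixels (the function name and argument order are illustrative, not the repo's API):

```python
def convert_annotation(x, y, w, h, class_label, img_w, img_h):
    """Convert one (x, y, w, h, class_label) ground-truth entry into the
    normalized [xmin, ymin, xmax, ymax, label] form that ssd.pytorch's
    VOC-style dataset transforms produce. Assumes (x, y) is the top-left
    corner in pixels. class_label must be a 0-based object-class index
    smaller than num_classes - 1 (the loss adds a background class);
    otherwise conf_t ends up with out-of-range indices for gather.
    """
    return [
        x / img_w,        # xmin, normalized to [0, 1]
        y / img_h,        # ymin
        (x + w) / img_w,  # xmax
        (y + h) / img_h,  # ymax
        int(class_label),
    ]
```

If the gather error persists after this, checking that `num_classes` passed to the model equals the number of object classes plus one (for background) is usually the next step.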
|
closed
|
2018-05-06T06:07:06Z
|
2020-05-07T18:36:02Z
|
https://github.com/amdegroot/ssd.pytorch/issues/161
|
[] |
RyuJunHwan
| 10
|
PokeAPI/pokeapi
|
api
| 605
|
sinistea a. polteageist antique
|
Hey,
why is there no sprite for the two antique forms? It's the same sprite as the normal one.
Greetz,
Luzifer
|
open
|
2021-03-24T22:58:46Z
|
2021-03-24T22:59:57Z
|
https://github.com/PokeAPI/pokeapi/issues/605
|
[] |
LuziferSenpai
| 0
|
marimo-team/marimo
|
data-visualization
| 3,815
|
Marimo --headless doesn't give the link with access token
|
### Describe the bug
I ran `marimo edit file.py --headless --host 0.0.0.0` on an OpenSUSE Tumbleweed machine on my LAN, and when I opened the link, the web page asked me for an Access Token/Password. What is this access token/password? Does it have a default value?

I tried my user password and the root password; all of them gave the `'NoneType' object has no attribute 'is_authenticated'` error:

The terminal printed errors:
<details>
```
ERROR: Exception in ASGI application
+ Exception Group Traceback (most recent call last):
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/_utils.py", line 76, in collapse_excgroups
| yield
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/base.py", line 178, in __call__
| async with anyio.create_task_group() as task_group:
| ^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/anyio/_backends/_asyncio.py", line 767, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
| result = await app( # type: ignore[func-returns-value]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
| return await self.app(scope, receive, send)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/applications.py", line 112, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/errors.py", line 187, in __call__
| raise exc
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __call__
| await self.app(scope, receive, _send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/auth.py", line 218, in __call__
| return await super().__call__(scope, receive, send)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/sessions.py", line 85, in __call__
| await self.app(scope, receive, send_wrapper)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/base.py", line 177, in __call__
| with recv_stream, send_stream, collapse_excgroups():
| ^^^^^^^^^^^^^^^^^^^^
| File "/usr/lib64/python3.12/contextlib.py", line 158, in __exit__
| self.gen.throw(value)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/_utils.py", line 82, in collapse_excgroups
| raise exc
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/base.py", line 179, in __call__
| response = await self.dispatch_func(request, call_next)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/middleware.py", line 146, in dispatch
| return await call_next(request)
| ^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/base.py", line 154, in call_next
| raise app_exc
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/base.py", line 141, in coro
| await self.app(scope, receive_or_disconnect, send_no_error)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/auth.py", line 248, in __call__
| await super().__call__(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/authentication.py", line 48, in __call__
| await self.app(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/auth.py", line 237, in wrapped_app
| await app(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/cors.py", line 93, in __call__
| await self.simple_response(scope, receive, send, request_headers=headers)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/cors.py", line 144, in simple_response
| await self.app(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/middleware.py", line 101, in __call__
| return await self.app(scope, receive, send)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/middleware.py", line 337, in __call__
| await self.app(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/middleware.py", line 337, in __call__
| await self.app(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
| await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
| raise exc
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
| await app(scope, receive, sender)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 715, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 735, in app
| await route.handle(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 460, in handle
| await self.app(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 715, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 735, in app
| await route.handle(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 288, in handle
| await self.app(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 76, in app
| await wrap_app_handling_exceptions(app, request)(scope, receive, send)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
| raise exc
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
| await app(scope, receive, sender)
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 73, in app
| response = await f(request)
| ^^^^^^^^^^^^^^^^
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/router.py", line 54, in wrapper_func
| response = await func(request=request)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/endpoints/login.py", line 108, in login_submit
| if request.user.is_authenticated:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| AttributeError: 'NoneType' object has no attribute 'is_authenticated'
+------------------------------------
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/auth.py", line 218, in __call__
return await super().__call__(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/sessions.py", line 85, in __call__
await self.app(scope, receive, send_wrapper)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/base.py", line 177, in __call__
with recv_stream, send_stream, collapse_excgroups():
^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.12/contextlib.py", line 158, in __exit__
self.gen.throw(value)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/_utils.py", line 82, in collapse_excgroups
raise exc
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/base.py", line 179, in __call__
response = await self.dispatch_func(request, call_next)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/middleware.py", line 146, in dispatch
return await call_next(request)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/base.py", line 154, in call_next
raise app_exc
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/base.py", line 141, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/auth.py", line 248, in __call__
await super().__call__(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/authentication.py", line 48, in __call__
await self.app(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/auth.py", line 237, in wrapped_app
await app(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/cors.py", line 93, in __call__
await self.simple_response(scope, receive, send, request_headers=headers)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/cors.py", line 144, in simple_response
await self.app(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/middleware.py", line 101, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/middleware.py", line 337, in __call__
await self.app(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/middleware.py", line 337, in __call__
await self.app(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 460, in handle
await self.app(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
^^^^^^^^^^^^^^^^
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/router.py", line 54, in wrapper_func
response = await func(request=request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rico/pylearning/.venv/lib64/python3.12/site-packages/marimo/_server/api/endpoints/login.py", line 108, in login_submit
if request.user.is_authenticated:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'is_authenticated'
```
</details>
Then I found #1757; in the first picture in that issue, `marimo tutorial intro --headless` gives a link with an access token. I tried this too, but it still printed no access-token link and the page asked me for the access token/password.

After that, I checked the marimo docs and found the `--token-password` option, but trying it produced the same error.

Finally, I found the `--no-token` option, and this worked: no access token/password is needed, and I go straight to the marimo edit page.
So I'm confused. Should `marimo --headless --host 0.0.0.0` print the link with the access token, or does the token have a default value? Am I misunderstanding the usage of `--token-password`?
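For reference, the three modes discussed above can be summarized as follows; this is a sketch based only on the flags named in this report, and `mysecret` is a hypothetical placeholder value:

```shell
# Default: marimo generates a random access token (some versions print it
# as part of the URL; in the version reported here, only the login page shows):
marimo edit file.py --headless --host 0.0.0.0

# What --token-password is meant to do: set the token yourself
# (in the version reported here, this still hit the is_authenticated error):
marimo edit file.py --headless --host 0.0.0.0 --token-password mysecret

# Skip token auth entirely (only reasonable on a trusted LAN):
marimo edit file.py --headless --host 0.0.0.0 --no-token
```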
### Environment
<details>
```
{
"marimo": "0.11.5",
"OS": "OpenSUSE Tumbleweed",
"OS Version": "6.13.1-1-default",
"Processor": "x86_64",
"Python Version": "3.12.9",
"Binaries": {
"Browser": "--",
"Node": "--"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.27.0",
"packaging": "24.2",
"psutil": "7.0.0",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.3",
"pyyaml": "6.0.2",
"ruff": "0.9.6",
"starlette": "0.45.3",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "15.0"
},
"Optional Dependencies": {},
"Experimental Flags": {}
}
```
</details>
### Code to reproduce
_No response_
|
closed
|
2025-02-17T15:12:58Z
|
2025-02-17T19:56:57Z
|
https://github.com/marimo-team/marimo/issues/3815
|
[
"bug"
] |
justrico
| 5
|
JoeanAmier/TikTokDownloader
|
api
| 271
|
Unexpected error and exit when downloading a specific video
|
I am downloading the videos posted by this creator, and every time the run reaches this particular video it errors out and exits; repeated attempts also stop at the same point. I don't know what is going on; the error output is below.
Network error:
Retrying, attempt 1
[Video] 2024-08-02 18.54.29-视频-一一丫丫-还记得6年前的那两个小丫头吗?是的,她们已经长大了 #一一丫丫 #抖音潮流舞蹈地图
#不怕不怕舞蹈 download interrupted, error:
2024-08-02 18.54.29-视频-一一丫丫-还记得6年前的那两个小丫头吗?是的,她们已经长大了 #一一丫丫 #抖音潮流舞蹈地图
#不怕不怕舞蹈.mp4 file deleted
Traceback (most recent call last):
File "main.py", line 7, in main
File "src\application\TikTokDownloader.py", line 327, in run
File "src\application\TikTokDownloader.py", line 212, in main_menu
File "src\application\TikTokDownloader.py", line 295, in compatible
File "src\application\TikTokDownloader.py", line 220, in complete
File "src\application\main_complete.py", line 1522, in run
File "src\application\main_complete.py", line 263, in account_acquisition_interactive
File "src\application\main_complete.py", line 281, in __account_secondary_menu
File "src\application\main_complete.py", line 898, in __multiple_choice
File "src\application\main_complete.py", line 351, in account_detail_inquire
File "src\application\main_complete.py", line 389, in __account_detail_handle
File "src\application\main_complete.py", line 460, in deal_account_detail
File "src\application\main_complete.py", line 524, in _batch_process_detail
File "src\application\main_complete.py", line 571, in download_account_detail
File "src\downloader\download.py", line 123, in run
File "src\downloader\download.py", line 151, in run_batch
File "src\downloader\download.py", line 275, in batch_processing
File "src\downloader\download.py", line 296, in downloader_chart
File "src\tools\retry.py", line 17, in inner
File "src\downloader\download.py", line 471, in request_file
File "httpx_models.py", line 761, in raise_for_status
httpx.HTTPStatusError: Client error '424 Failed Dependency' for url 'http://v3-web.douyinvod.com/5be71a3ba9b1f7e14b213ad124654a9f/66b0687c/video/tos/cn/tos-cn-ve-15/8ec38077027646609161efd2e83d1078/?a=6383&ch=10010&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C3&cv=1&br=2004&bt=2004&cs=0&ds=3&ft=khyHAB1UiiuG1_rMCdOC~49Zyo3nOz7_9r7bpMyC~_7gaXQ2B22_TWDMDoMUmbd.o~&mime_type=video_mp4&qs=0&rc=Z2c0PDxmOWQ2Njs0PDQ5N0BpM2Rod3lmc2l4dDMzOmkzM0BjLl80LzFiNi8xMzAvLWE1YSNoYmhpL2cubm9fLS1fLS9zcw%3D%3D&btag=c0000e00010000&cquery=100B_102v_102u_100o_101r&dy_q=1722826301&l=20240805105141EE30592C39E298B5E03F'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/424
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main.py", line 11, in
File "asyncio\runners.py", line 194, in run
File "asyncio\runners.py", line 118, in run
File "asyncio\base_events.py", line 687, in run_until_complete
File "main.py", line 6, in main
File "src\application\TikTokDownloader.py", line 102, in aexit
File "src\application\TikTokDownloader.py", line 352, in close
File "src\application\TikTokDownloader.py", line 330, in delete_cache
File "shutil.py", line 781, in rmtree
File "shutil.py", line 635, in _rmtree_unsafe
File "shutil.py", line 633, in _rmtree_unsafe
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process.: 'D:\Users\110\Desktop\TikTokDownloader_V5.4_WIN\_internal\cache\2020-04-22 21.02.00-视频-一一丫丫-有两个漏风的棉袄是种什么体验。#坑爹 @抖音小助手.mp4'
[1252] Failed to execute script 'main' due to unhandled exception!
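The batch aborts because the `httpx.HTTPStatusError` raised by `raise_for_status` propagates out of the per-file download. One workaround is to catch the error per item and skip stubbornly failing files instead of crashing the whole run. The sketch below is illustrative, not the project's actual code: its `DownloadError` stands in for `httpx.HTTPStatusError`, and `fetch` for the project's `request_file`.

```python
class DownloadError(Exception):
    """Stand-in for httpx.HTTPStatusError in this sketch."""


def download_batch(items, fetch, max_retries=3):
    """Try each item up to max_retries times; items that keep failing
    (e.g. with HTTP 424 Failed Dependency) are collected and skipped
    instead of aborting the whole batch."""
    done, skipped = [], []
    for item in items:
        for attempt in range(1, max_retries + 1):
            try:
                done.append(fetch(item))
                break  # success: move on to the next item
            except DownloadError:
                if attempt == max_retries:
                    skipped.append(item)  # give up on this item only
    return done, skipped
```

The skipped list can then be logged or written out for a later retry, so one persistently failing URL no longer takes down the cache-cleanup path either.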
|
open
|
2024-08-06T01:58:42Z
|
2024-08-06T01:58:42Z
|
https://github.com/JoeanAmier/TikTokDownloader/issues/271
|
[] |
foxyuaaalll
| 0
|
plotly/dash
|
data-visualization
| 2,428
|
[BUG] search_value triggers unrelated callbacks, making dropdown search unusable
|
While chasing down a bug in explainerdashboard, I noticed that when you have a dynamic server-side dropdown search and also a callback targeting the dropdown's `value`, the latter gets triggered every time you search in the dropdown, usually with unwanted side effects. **In this case it makes the dropdown search completely unusable.**
Below is an example with a dropdown and a select-random-item button. Searching in the dropdown triggers the button callback. Adding a `callback_context` check to verify that the button callback was indeed triggered by the button's `n_clicks` input seems to fix the problem.
```python
import random
import dash
from dash import Dash, dcc, html, dcc, Input, Output, State
from dash.exceptions import PreventUpdate
options = [str(i) for i in range(1000)]
app = Dash(__name__)
app.layout = html.Div([
html.Div(id='dropdown-output'),
html.Button("Random Button", id='button'),
dcc.Checklist(options=['fix bug'], value=[], id='fix-bug'),
dcc.Dropdown(id="index-dropdown", value="654")
])
@app.callback(
Output("index-dropdown", "value"),
Input("button", "n_clicks"),
State('fix-bug', 'value'),
)
def random_index(n_clicks, fix_bug):
print("triggered random click!")
if fix_bug and dash.callback_context.triggered_id != 'button':
raise PreventUpdate
return random.choice(options)
@app.callback(
Output("index-dropdown", "options"),
Input("index-dropdown", "search_value")
)
def update_options(search_value):
if not search_value:
raise PreventUpdate
return [o for o in options if str(search_value) in o]
@app.callback(
Output("dropdown-output", "children"),
Input("index-dropdown", "value")
)
def send_output(index):
return f"selected {index}!"
app.run_server(port=8051, debug=False)
```
|
closed
|
2023-02-17T12:20:44Z
|
2023-07-06T15:05:12Z
|
https://github.com/plotly/dash/issues/2428
|
[] |
oegedijk
| 7
|
Johnserf-Seed/TikTokDownload
|
api
| 55
|
Runtime error; the link is correct, and I am not sure how to fix it
|
Windows 10. I have confirmed the link is fine, but the program crashes and closes immediately.
|
closed
|
2021-09-13T17:12:34Z
|
2021-09-14T14:37:32Z
|
https://github.com/Johnserf-Seed/TikTokDownload/issues/55
|
[
"额外求助(help wanted)"
] |
tfeather4
| 3
|
PaddlePaddle/models
|
computer-vision
| 5,115
|
Optimizer file [/home/weidawang/.paddle/weights/BMN.pdopt] not exits
|
I got this error while getting started with BMN:
```
(base) weidawang@weidawang-TUF-Gaming-FX506LU-FX506LU:~/Repo/PaddlePaddle/models/PaddleCV/video$ bash run.sh predict BMN ./configs/bmn.yaml
predict BMN ./configs/bmn.yaml
DALI is not installed, you can improve performance if use DALI
[INFO: predict.py: 199]: Namespace(batch_size=1, config='./configs/bmn.yaml', filelist=None, infer_topk=20, log_interval=1, model_name='BMN', save_dir='data/predict_results', use_gpu=True, video_path='', weights=None)
[INFO: config_utils.py: 70]: ---------------- Infer Arguments ----------------
[INFO: config_utils.py: 72]: MODEL:
[INFO: config_utils.py: 74]: name:BMN
[INFO: config_utils.py: 74]: tscale:100
[INFO: config_utils.py: 74]: dscale:100
[INFO: config_utils.py: 74]: feat_dim:400
[INFO: config_utils.py: 74]: prop_boundary_ratio:0.5
[INFO: config_utils.py: 74]: num_sample:32
[INFO: config_utils.py: 74]: num_sample_perbin:3
[INFO: config_utils.py: 74]: anno_file:data/dataset/bmn/activitynet_1.3_annotations.json
[INFO: config_utils.py: 74]: feat_path:/media/weidawang/DATA/dataset/ActionLocalization/bmn_feat
[INFO: config_utils.py: 72]: TRAIN:
[INFO: config_utils.py: 74]: subset:train
[INFO: config_utils.py: 74]: epoch:9
[INFO: config_utils.py: 74]: batch_size:16
[INFO: config_utils.py: 74]: num_threads:8
[INFO: config_utils.py: 74]: use_gpu:True
[INFO: config_utils.py: 74]: num_gpus:4
[INFO: config_utils.py: 74]: learning_rate:0.001
[INFO: config_utils.py: 74]: learning_rate_decay:0.1
[INFO: config_utils.py: 74]: lr_decay_iter:4200
[INFO: config_utils.py: 74]: l2_weight_decay:0.0001
[INFO: config_utils.py: 72]: VALID:
[INFO: config_utils.py: 74]: subset:validation
[INFO: config_utils.py: 74]: batch_size:16
[INFO: config_utils.py: 74]: num_threads:8
[INFO: config_utils.py: 74]: use_gpu:True
[INFO: config_utils.py: 74]: num_gpus:4
[INFO: config_utils.py: 72]: TEST:
[INFO: config_utils.py: 74]: subset:validation
[INFO: config_utils.py: 74]: batch_size:1
[INFO: config_utils.py: 74]: num_threads:1
[INFO: config_utils.py: 74]: snms_alpha:0.001
[INFO: config_utils.py: 74]: snms_t1:0.5
[INFO: config_utils.py: 74]: snms_t2:0.9
[INFO: config_utils.py: 74]: output_path:data/output/EVAL/BMN_results
[INFO: config_utils.py: 74]: result_path:data/evaluate_results
[INFO: config_utils.py: 72]: INFER:
[INFO: config_utils.py: 74]: subset:test
[INFO: config_utils.py: 74]: batch_size:1
[INFO: config_utils.py: 74]: num_threads:1
[INFO: config_utils.py: 74]: snms_alpha:0.4
[INFO: config_utils.py: 74]: snms_t1:0.5
[INFO: config_utils.py: 74]: snms_t2:0.9
[INFO: config_utils.py: 74]: filelist:data/dataset/bmn/infer.list
[INFO: config_utils.py: 74]: output_path:data/output/INFER/BMN_results
[INFO: config_utils.py: 74]: result_path:data/predict_results
[INFO: config_utils.py: 75]: -------------------------------------------------
W1218 16:29:50.778240 31472 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 75, Driver API Version: 11.1, Runtime API Version: 10.2
W1218 16:29:50.779249 31472 device_context.cc:346] device: 0, cuDNN Version: 8.0.
test subset video numbers: 5
Traceback (most recent call last):
File "predict.py", line 201, in <module>
infer(args)
File "predict.py", line 132, in infer
fluid.default_main_program(), place)
File "/home/weidawang/Repo/PaddlePaddle/models/PaddleCV/video/models/model.py", line 158, in load_test_weights
fluid.load(prog, weights, executor=exe, var_list=params_list)
File "<decorator-gen-76>", line 2, in load
File "/home/weidawang/miniconda3/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "/home/weidawang/miniconda3/lib/python3.7/site-packages/paddle/fluid/framework.py", line 215, in __impl__
return func(*args, **kwargs)
File "/home/weidawang/miniconda3/lib/python3.7/site-packages/paddle/fluid/io.py", line 1882, in load
"Optimizer file [{}] not exits".format(opt_file_name)
AssertionError: Optimizer file [/home/weidawang/.paddle/weights/BMN.pdopt] not exits
```
|
closed
|
2020-12-18T08:37:09Z
|
2020-12-22T05:34:56Z
|
https://github.com/PaddlePaddle/models/issues/5115
|
[] |
wwdok
| 11
|
gradio-app/gradio
|
data-visualization
| 10,027
|
I want to specify the resolution of the Webcam
|
- [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
Currently the resolution is hardcoded.
gradio/js/image/shared/stream_utils.ts
```ts
export async function get_video_stream(
include_audio: boolean,
video_source: HTMLVideoElement,
device_id?: string
): Promise<MediaStream> {
const size = {
width: { ideal: 1920 },
height: { ideal: 1440 }
};
```
**Describe the solution you'd like**
It would be great if you could specify the resolution.
```python
gradio.Image(label="Input", sources="webcam", webcam_height=720, webcam_width=1280)
```
|
closed
|
2024-11-23T17:15:06Z
|
2024-12-04T00:01:48Z
|
https://github.com/gradio-app/gradio/issues/10027
|
[
"enhancement"
] |
Azunyan1111
| 1
|
WZMIAOMIAO/deep-learning-for-image-processing
|
deep-learning
| 420
|
yolov3: how are positive and negative samples selected?
|
Hi, in the yolov3 code here, on what basis are positive and negative samples assigned? In build_targets I only see anchors with IoU > 0.2 taken as positives; how are the negatives determined?
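For context on how such a threshold scheme typically works: anchors whose IoU with some ground-truth box exceeds the positive threshold become positives, and every anchor not selected as a positive serves as a negative for the objectness loss; there is usually no second, explicit negative threshold. A minimal numpy sketch of that assignment (illustrative, not this repository's build_targets):

```python
import numpy as np


def assign_samples(anchor_ious, pos_thresh=0.2):
    """anchor_ious: (num_anchors, num_gt) IoU matrix.
    Anchors whose best IoU against any ground-truth box exceeds
    pos_thresh are positives; every other anchor is a negative
    for the objectness loss (no separate negative threshold)."""
    best_iou = anchor_ious.max(axis=1)  # best match per anchor
    positive = best_iou > pos_thresh    # boolean mask of positives
    negative = ~positive                # everything else is negative
    return positive, negative
```

The positive mask drives the box-regression and classification losses, while both masks contribute to the objectness term.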
|
closed
|
2021-12-07T12:09:26Z
|
2021-12-10T04:56:34Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/420
|
[] |
Dandelion111
| 1
|
PokemonGoF/PokemonGo-Bot
|
automation
| 5,900
|
API v0.55.0 - API check fail
|
Ran a fresh install of the bot (master branch, commit 7c2722d7) on a brand new VM (Ubuntu 16.04 LTS) after trying Pogarek's for a while, bought a brand new hash key (and according to buddyauth it's gone from "UNUSED / INACTIVE" to "ACTIVE", so that bit is OK), tried to follow @pogarek's advice in #5882 and ran
`pip2 install --upgrade -r requirements.txt`
But I am still getting the API check fail. Is this expected behaviour, or is there something in the bot that needs to be updated to work with 0.55.0?
|
closed
|
2017-02-06T11:12:22Z
|
2017-02-19T09:47:11Z
|
https://github.com/PokemonGoF/PokemonGo-Bot/issues/5900
|
[] |
camnomis
| 5
|
awesto/django-shop
|
django
| 149
|
multiple address models break unique related_name constraint
|
I ran into this bug while trying to run tests (http://stackoverflow.com/questions/10058277/django-test-fails-when-creating-test-database/), and a user of one of my plugins has run into it when running my migrations: https://github.com/Lacrymology/django-shop-area-tax/issues/1
The issue seems to be the existence of two models with `ForeignKey(User, related_name="[shipping|billing]_address"..)`, i.e. your own user-defined models and `shop.addressmodel.Address`.
Even if you don't add `shop.addressmodel` to your user models, the tests import them to use them, and then the conflict arises. In the case of the bug the other user is reporting, I'm not sure where the import is coming from.
But it has led me to think that my own app will have the same problem if a user wants to use the l10n app but provide their own Address model. I think I'll have to move it out of my app's models.py file; that'd do the trick, wouldn't it?
|
closed
|
2012-04-18T16:26:54Z
|
2016-02-02T14:05:31Z
|
https://github.com/awesto/django-shop/issues/149
|
[] |
Lacrymology
| 3
|
miguelgrinberg/Flask-SocketIO
|
flask
| 740
|
Connection will be auto-closed after some hours
|
Hi Miguel,
If the connection just stays idle for a while, probably some hours, it gets closed; the log is shown below. Celery's broker_heartbeat is already set to 0, but it seems to have no effect.
ws Connected sid is 0x 021099aa317247d38c003e2d6d8a601c
<Request 'http://localhost:85/socket.io/?EIO=3&transport=polling&t=MINy8tW' [GET]>
some hours later....
Connection error while reading from queue
Traceback (most recent call last):
File "c:\Python34\lib\site-packages\socketio\kombu_manager.py", line 98, in _listen
message = queue.get(block=True)
File "c:\Python34\lib\site-packages\kombu\simple.py", line 53, in get
timeout=timeout and remaining)
File "c:\Python34\lib\site-packages\kombu\connection.py", line 288, in drain_events
return self.transport.drain_events(self.connection, **kwargs)
File "c:\Python34\lib\site-packages\kombu\transport\pyamqp.py", line 95, in drain_events
return connection.drain_events(**kwargs)
File "c:\Python34\lib\site-packages\amqp\connection.py", line 303, in drain_events
chanmap, None, timeout=timeout,
File "c:\Python34\lib\site-packages\amqp\connection.py", line 366, in _wait_multiple
channel, method_sig, args, content = read_timeout(timeout)
File "c:\Python34\lib\site-packages\amqp\connection.py", line 330, in read_timeout
return self.method_reader.read_method()
File "c:\Python34\lib\site-packages\amqp\method_framing.py", line 189, in read_method
raise m
File "c:\Python34\lib\site-packages\amqp\method_framing.py", line 107, in _next_method
frame_type, channel, payload = read_frame()
File "c:\Python34\lib\site-packages\amqp\transport.py", line 154, in read_frame
frame_header = read(7, True)
File "c:\Python34\lib\site-packages\amqp\transport.py", line 283, in _read
raise IOError('Socket closed')
OSError: Socket closed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Python34\lib\site-packages\socketio\kombu_manager.py", line 100, in _listen
yield message.payload
File "c:\Python34\lib\site-packages\kombu\simple.py", line 30, in __exit__
self.close()
File "c:\Python34\lib\site-packages\kombu\simple.py", line 82, in close
self.consumer.cancel()
File "c:\Python34\lib\site-packages\kombu\messaging.py", line 461, in cancel
cancel(tag)
File "c:\Python34\lib\site-packages\amqp\channel.py", line 1631, in basic_cancel
self._send_method((60, 30), args)
File "c:\Python34\lib\site-packages\amqp\abstract_channel.py", line 56, in _send_method
self.channel_id, method_sig, args, content,
File "c:\Python34\lib\site-packages\amqp\method_framing.py", line 221, in write_method
write_frame(1, channel, payload)
File "c:\Python34\lib\site-packages\amqp\transport.py", line 182, in write_frame
frame_type, channel, size, payload, 0xce,
File "c:\Python34\lib\site-packages\eventlet\greenio\base.py", line 400, in sendall
tail = self.send(data, flags)
File "c:\Python34\lib\site-packages\eventlet\greenio\base.py", line 394, in send
return self._send_loop(self.fd.send, data, flags)
File "c:\Python34\lib\site-packages\eventlet\greenio\base.py", line 381, in _send_loop
return send_method(data, *args)
ConnectionResetError:
|
closed
|
2018-07-14T09:40:38Z
|
2018-09-29T09:37:39Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/740
|
[
"question"
] |
GitKKg
| 3
|
microsoft/unilm
|
nlp
| 874
|
beitv2 512 pretrain models
|
Thanks for releasing the BEiT v2 code and pretrained models; it's really exciting work.
I have a question: do you plan to release beitv2_512 pretrained models? If not, how can I finetune the beitv2_224 pretrained model with 512×512 images?
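For the second question, a common approach (not specific to BEiT v2; the numbers and the function below are illustrative assumptions) is to interpolate the patch position-embedding grid from the pretraining resolution to the new one, e.g. 14×14 patches at 224 to 32×32 at 512 with patch size 16. A dependency-free bilinear resize of one embedding channel:

```python
# Hedged sketch: resize a 2D grid of position-embedding values by bilinear
# interpolation. In practice you would do this per embedding dimension
# (frameworks usually use torch.nn.functional.interpolate in bicubic mode).
def bilinear_resize(grid, new_h, new_w):
    old_h, old_w = len(grid), len(grid[0])
    out = []
    for i in range(new_h):
        y = i * (old_h - 1) / (new_h - 1)
        y0, ty = int(y), y - int(y)
        y1 = min(y0 + 1, old_h - 1)
        row = []
        for j in range(new_w):
            x = j * (old_w - 1) / (new_w - 1)
            x0, tx = int(x), x - int(x)
            x1 = min(x0 + 1, old_w - 1)
            top = grid[y0][x0] * (1 - tx) + grid[y0][x1] * tx
            bot = grid[y1][x0] * (1 - tx) + grid[y1][x1] * tx
            row.append(top * (1 - ty) + bot * ty)
        out.append(row)
    return out

small = [[0.0, 1.0], [2.0, 3.0]]
big = bilinear_resize(small, 4, 4)  # corner values are preserved
```

After resizing, the flattened grid replaces the pretrained position embeddings before finetuning at the larger input size.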
|
closed
|
2022-09-19T13:40:00Z
|
2022-09-28T12:41:06Z
|
https://github.com/microsoft/unilm/issues/874
|
[] |
UnpureRationalist
| 1
|
2noise/ChatTTS
|
python
| 632
|
Android TTS
|
I want to use it for reading novels aloud. Are there plans to develop an Android version?
|
closed
|
2024-07-26T08:18:21Z
|
2024-08-01T16:53:44Z
|
https://github.com/2noise/ChatTTS/issues/632
|
[
"duplicate"
] |
lbbboy
| 2
|
aio-libs/aiomysql
|
asyncio
| 278
|
how to use ResultProxy with Sqlalchemy ORM
|
When I get a ResultProxy object, how can I convert it to a SQLAlchemy ORM object?
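aiomysql's `sa` layer returns plain row objects rather than ORM instances; one workaround (a minimal sketch with a stand-in model class, not an aiomysql API) is to unpack the row mapping into the model's constructor:

```python
# Hypothetical model standing in for a declarative SQLAlchemy class.
class User:
    def __init__(self, id, name):
        self.id = id
        self.name = name

def row_to_obj(row_mapping, model_cls):
    # Fetched rows behave like mappings in many drivers, so dict(row)
    # yields column-name -> value pairs; adjust if your driver differs.
    return model_cls(**dict(row_mapping))

row = {"id": 1, "name": "alice"}  # stands in for a fetched row
user = row_to_obj(row, User)
```

This only reconstructs plain objects; it does not attach them to a SQLAlchemy session, so lazy loading and change tracking are not available.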
|
closed
|
2018-04-04T10:27:15Z
|
2018-04-04T14:17:36Z
|
https://github.com/aio-libs/aiomysql/issues/278
|
[] |
wuyazi
| 1
|
biolab/orange3
|
data-visualization
| 6,739
|
A new Melt widget icon
|
**What's your use case?**
As of today, the melt widget icon is a shopping cart. This is misleading and not easy to understand.
**What's your proposed solution?**
A new icon, pretty much explicit.

https://github.com/simonaubertbd/misc_icons/blob/main/melt.svg
**Are there any alternative solutions?**
N/A
|
open
|
2024-02-17T07:01:40Z
|
2024-03-11T11:43:22Z
|
https://github.com/biolab/orange3/issues/6739
|
[] |
simonaubertbd
| 5
|
httpie/cli
|
python
| 882
|
Documentation example showing how to post raw JSON to some route
|
It's not clear from the website's documentation, or the `--help` output, how to do the equivalent of the following curl task:
**Post a raw JSON query to ElasticSearch:**
```
curl \
--header "Content-Type: application/json" \
--request POST \
--data '{ "_source": [ "restricted_countries.*" ], "query": { "match_all": {} }, "size": 1000 }' \
'http://localhost:9200/_search'
```
The examples seem to show how to post form values and other stuff.
It would be great if there was a way to do this, and if it was documented in all places, such that searching for "httpie post json" would show the answer in the search result context of Google's top hits.
After a quick read of the documentation, I'm not even sure if it's possible, but I didn't dig that deep.
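For reference, httpie can read a raw request body from stdin, so the curl call above maps to piping the JSON in (this assumes `http` is on PATH and ElasticSearch is listening locally; the guard keeps the sketch from failing where httpie is absent):

```shell
payload='{ "_source": [ "restricted_countries.*" ], "query": { "match_all": {} }, "size": 1000 }'
# httpie defaults to a JSON Content-Type for piped request bodies
if command -v http >/dev/null 2>&1; then
  echo "$payload" | http POST localhost:9200/_search
fi
```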
|
closed
|
2020-03-06T17:27:15Z
|
2025-03-04T03:17:31Z
|
https://github.com/httpie/cli/issues/882
|
[
"docs"
] |
nhooey
| 2
|
DistrictDataLabs/yellowbrick
|
scikit-learn
| 892
|
matplotlib pip testing package to eliminate Freetype font differences across environments
|
@ndanielsen TLDR: Based on the validated success of the testing version of matplotlib on conda in eliminating Freetype differences on CI builds, it seems worthwhile to ask the matplotlib team if there is the possibility of a pip version being released for testing as well (at least for Windows). The number of affected tests would go well beyond the 31 xfailing on Appveyor generally and Linux conda here: I took tests from the test_classifier folder where there were tols>0.1 and removed them. From this, 14 tests were failing. This is only a single directory: there could be well over 100 tests where Freetype differences could be eliminated across environments.
- Windows conda: Freetype differences are eliminated on CI builds and locally with mpl testing package
- Windows pip: If matplotlib team could provide testing package, have the potential to eliminate diffs
- Linux conda: Local and Travis testing works on all images
### Background
The majority of xfails on Windows (both regular Python and Miniconda) can be removed following the removal of x/y labels and legends, but there is other text that remains
While the baseline images were updated, the xfails that were in place were not removed, because I first wanted to analyze which images would still fail and what overlap there was with Linux, which is what led to the 31 xfailing images listed below.
The finding that most Windows xfails can be removed is based on the analysis here (the builds were on the "windiffs" branch):
https://github.com/DistrictDataLabs/yellowbrick/pull/823#issue-272241944
### Image comparison differences based on less common text elements
The number of tests that are associated with any particular text element are currently relatively small (3-10). Therefore, having a more scalable approach to remove or otherwise reconcile differences in text across OS platforms and/or Python distributions (PyPI/Anaconda) and that will work with future visualizers is more desirable than dealing with such differences for each kind of text.
A common version of Freetype may provide a way to deal with this, if it can be used easily on CI builds and in local testing environments. There is a conda package for testing matplotlib that aims to get rid of Freetype differences across environments.
### LOCAL Windows conda and Linux conda environments using the testing version of matplotlib
All 31 tests that have been xfailing with the regular version of `matplotlib` PASSED with the `testing` version.
(The testing version of matplotlib has a specific version of Freetype.)
### CI Appveyor conda and Travis conda using the testing version of matplotlib
All 31 tests PASS on Appveyor
https://ci.appveyor.com/project/nickpowersys/yellowbrick/builds/25701049
25/31 tests PASS on Travis
https://www.travis-ci.org/nickpowersys/yellowbrick/jobs/553964607
**In order to ensure that Travis testing succeeds, the regular matplotlib package should not be installed and uninstalled before installing the matplotlib testing package.**
The following are instances of such tests that have been identified to date:
### Labels on the right side
- test_cluster/test_elbow.py
- test_timings (it uses poof instead of finalize)
- test_features/test_rankd.py
- test_rank2d_covariance
- test_rank2d_integrated_numpy
- test_rank2d_integrated_pandas
- test_rank2d_kendalltau
- test_rank2d_pearson
- test_rank2d_spearman
### Only one of multiple subplots has text elements removed
- test_base.py
- test_draw_visualizer_grid
- test_draw_with_cols
- test_draw_with_rows
### Labels for each iso_f1_curve
- test_classifier/test_prcurve.py
- test_binary_probability_decision
- test_custom_iso_f1_scores
- test_multiclass_probability
- test_quick_method
### Labels on the plots
- test_cluster/test_icdm.py
- test_affinity_tsne_no_legend
- test_kmeans_md
- test_quick_method
### Labels on plots besides the main one
- test_regressor/test_residuals.py
- test_residuals_plot
- test_residuals_plot_pandas
- test_residuals_quick_method
- test_features/test_jointplot.py
### Non-text-based differences as well
- test_features/test_jointplot.py
- test_columns_double_index_discrete_y_hist
- test_columns_double_int_index_numpy_no_y_hist
- test_columns_double_str_index_pandas_no_y_hist
- test_columns_none_x_hist
- test_columns_none_x_y_hist
- test_columns_single_int_index_numpy_hist
- test_columns_single_str_index_pandas_hist
### Text on the plot
- test_features/test_radviz.py
- test_integrated_radviz_with_pandas
- test_text/test_freqdist.py
- test_integrated_freqdist
### Text within confusion matrix
- test_classifier/test_confusion_matrix.py
- test_pandas_integration
- test_quick_method
|
closed
|
2019-06-22T17:43:37Z
|
2019-08-28T23:46:39Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/892
|
[
"type: technical debt"
] |
nickpowersys
| 2
|
streamlit/streamlit
|
data-visualization
| 10,770
|
Left aligned buttons
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Sometimes, left aligned buttons look better! Especially when with st.navigation in the sidebar imo
Maybe `st.button(..., align=...)`?
<img src="https://github.com/user-attachments/assets/9e8ff614-17e0-4e91-817c-579bb142ef52" width=150>
<img src="https://github.com/user-attachments/assets/b15b7075-c5d8-4404-bd74-49a25b424ec4" width=150>
### Why?
_No response_
### How?
_No response_
### Additional Context
_No response_
|
open
|
2025-03-13T16:52:15Z
|
2025-03-13T17:28:08Z
|
https://github.com/streamlit/streamlit/issues/10770
|
[
"type:enhancement",
"feature:st.download_button",
"feature:st.button",
"feature:st.link_button",
"feature:st.form_submit_button"
] |
sfc-gh-amiribel
| 1
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 3,313
|
CSS not working on subsite
|
**Describe the bug**
I try to design one subsite exactly as my main-site. But it seems that the uploaded .css-file is not working (same file as on the main-site)
**To Reproduce**
Steps to reproduce the behavior:
1. Design CSS-file for your mainsite and export it
2. Import that CSS-file to one of your subsites
**Expected behavior**
With the same CSS files, both sites should look identical?
**Desktop (please complete the following information):**
- OS: Ubuntu
- Edge, Firefox
- Version [e.g. 22]
|
closed
|
2022-11-18T15:19:22Z
|
2022-11-19T15:43:58Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3313
|
[] |
cdlh
| 5
|
marshmallow-code/marshmallow-sqlalchemy
|
sqlalchemy
| 571
|
TypeError: SQLAlchemySchemaMeta.get_declared_fields() missing 1 required positional argument: 'dict_cls'
|
Hi, I got the following error when launching an app that uses marshmallow-sqlalchemy:
```pytb
Traceback (most recent call last):
File "/usr/bin/superset", line 5, in <module>
from superset.cli.main import superset
File "/usr/lib/python3.11/site-packages/superset/__init__.py", line 21, in <module>
from superset.app import create_app
File "/usr/lib/python3.11/site-packages/superset/app.py", line 24, in <module>
from superset.initialization import SupersetAppInitializer
File "/usr/lib/python3.11/site-packages/superset/initialization/__init__.py", line 28, in <module>
from flask_appbuilder import expose, IndexView
File "/usr/lib/python3.11/site-packages/flask_appbuilder/__init__.py", line 5, in <module>
from .api import ModelRestApi # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/flask_appbuilder/api/__init__.py", line 29, in <module>
from marshmallow_sqlalchemy.fields import Related, RelatedList
File "/usr/lib/python3.11/site-packages/marshmallow_sqlalchemy/__init__.py", line 9, in <module>
from .schema import (
File "/usr/lib/python3.11/site-packages/marshmallow_sqlalchemy/schema.py", line 143, in <module>
class SQLAlchemySchema(
File "/usr/lib/python3.11/site-packages/marshmallow/schema.py", line 116, in __new__
klass._declared_fields = mcs.get_declared_fields(
^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: SQLAlchemySchemaMeta.get_declared_fields() missing 1 required positional argument: 'dict_cls'
```
|
closed
|
2024-02-09T16:26:24Z
|
2024-02-09T19:23:22Z
|
https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/571
|
[
"question"
] |
AlphaJack
| 4
|
s3rius/FastAPI-template
|
fastapi
| 115
|
Add Support for Prisma Python ORM
|
**Why**
Prisma Client Python is fully type safe and offers native support for usage with and without async.
However, the arguably best feature that Prisma Client Python provides is [autocompletion support](https://prisma-client-py.readthedocs.io/en/stable/#auto-completion-for-query-arguments). This makes writing database queries easier than ever!
**Links**
https://prisma-client-py.readthedocs.io/en/stable/
https://prisma-client-py.readthedocs.io/en/stable/getting_started/setup/
|
open
|
2022-07-30T13:31:33Z
|
2022-07-30T15:45:16Z
|
https://github.com/s3rius/FastAPI-template/issues/115
|
[] |
gurbaj5124871
| 2
|
fastapi-users/fastapi-users
|
asyncio
| 1,004
|
Security alert in one of the dependencies
|
## Describe the bug
Security alert in one of the dependencies: PyJWT
Key confusion through non-blocklisted public key formats.
[CVE-2022-29217](https://github.com/advisories/GHSA-ffqj-6fqr-9h24)
Consider changing the version of the PyJWT dependency to 2.4.0
## To Reproduce
None
## Expected behavior
None
## Configuration
None
### FastAPI Users configuration
None
## Additional context
None
|
closed
|
2022-05-26T16:05:33Z
|
2022-05-27T07:52:29Z
|
https://github.com/fastapi-users/fastapi-users/issues/1004
|
[
"bug"
] |
JimScope
| 2
|
plotly/dash
|
data-visualization
| 2,407
|
Mistake/Incorrect value in Error Message
|
I think the error message in this line is incorrect: https://github.com/plotly/dash/blob/dev/dash/_validate.py#L196
Should this line be
```
Expected {len(outi)}, got {len(vi)}
```
instead of
```
Expected {len(vi)}, got {len(outi)}
```
?
My understanding is `outi` is the output_list (aka output spec), while `vi` is the actual output value.
Thank you
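A tiny sketch of the suspected swap (variable names mirror `_validate.py`; the values are made up):

```python
# outi stands in for the output spec, vi for the value the callback returned.
outi = ["out1", "out2", "out3"]
vi = ["a", "b"]

current = f"Expected {len(vi)}, got {len(outi)}"    # what the code prints today
proposed = f"Expected {len(outi)}, got {len(vi)}"   # what this report suggests

print(current)   # Expected 2, got 3  (misleading: 3 outputs were expected)
print(proposed)  # Expected 3, got 2
```

With three declared outputs and two returned values, only the proposed ordering matches what actually happened.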
|
open
|
2023-02-01T05:19:08Z
|
2024-08-13T19:25:55Z
|
https://github.com/plotly/dash/issues/2407
|
[
"bug",
"P3"
] |
ajyl
| 3
|
allenai/allennlp
|
nlp
| 5,711
|
AllenNLP biased towards BERT
|
<!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [ x ] I have verified that the issue exists against the `main` branch of AllenNLP.
- [ x ] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/main/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [ x ] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [ x ] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [ x ] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/main/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/main) to find out if the bug was already fixed in the main branch.
- [ ] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [ x ] I have included in the "Related issues or possible duplicates" section below all related issues and possible duplicate issues (If there are none, check this box anyway).
- [ x ] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [ x ] I have included in the "Environment" section below the output of `pip freeze`.
- [ x ] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
<!-- Please provide a clear and concise description of what the bug is here. -->
I've been using AllenNLP since 2018, and I have already run **thousands** of NER benchmarks with it, since ELMo and, later, transformers. Its [CrfTagger](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/tagging/models/crf_tagger.py) model has always yielded superior results in every possible benchmark for this task. However, since my research group trained different RoBERTa models for Portuguese, we have been conducting benchmarks comparing them with an existing [BERT model](https://huggingface.co/neuralmind/bert-base-portuguese-cased), but we have been getting inconsistent results compared to other frameworks, such as huggingface's transformers.
Sorted results for AllenNLP grid search on CoNLL2003 using optuna (**all berts' results are better than all the robertas'**):

Sorted results for huggingface's transformers grid search on CoNLL2003 (**all robertas' results are better than all the berts'**):

I originally opened this as a [question on stackoverflow](https://stackoverflow.com/questions/73310991/is-allennlp-biased-towards-bert), as suggested in the issues guidelines (additional details already provided there), but I have failed to discover the problem by myself. I have run several unit tests from AllenNLP, concerning the tokenizers and embedders, and couldn't notice anything wrong, but I'm betting something is definitely wrong in the training process, since the results are so inferior for non-BERT models.
Although I'm reporting details with the current release version, I'd like to point out that I had already run this CoNLL 2003 benchmark with RoBERTa/AllenNLP a long time ago too, so it's not something new. At the time the results for RoBERTa were quite below bert-base, but at the time I just thought RoBERTa wasn't competitive for NER (which is not true at all).
It is expected that the results using AllenNLP are at least as good as the ones obtained using huggingface's framework.
## Related issues or possible duplicates
- [Opened as a question myself](https://github.com/allenai/allennlp/issues/5703)
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS: Linux
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version: 3.8.13
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
aiohttp==3.8.1
aiosignal==1.2.0
alembic==1.8.1
allennlp==2.10.0
allennlp-models==2.10.0
allennlp-optuna==0.1.7
asttokens==2.0.8
async-timeout==4.0.2
attrs==21.2.0
autopage==0.5.1
backcall==0.2.0
base58==2.1.1
blis==0.7.8
bokeh==2.4.3
boto3==1.24.67
botocore==1.27.67
cached-path==1.1.5
cachetools==5.2.0
catalogue==2.0.8
certifi @ file:///opt/conda/conda-bld/certifi_1655968806487/work/certifi
charset-normalizer==2.1.1
click==8.1.3
cliff==4.0.0
cloudpickle==2.2.0
cmaes==0.8.2
cmd2==2.4.2
colorama==0.4.5
colorlog==6.7.0
commonmark==0.9.1
conllu==4.4.2
converters-datalawyer==0.1.10
cvxopt==1.2.7
cvxpy==1.2.1
cycler==0.11.0
cymem==2.0.6
Cython==0.29.32
datasets==2.4.0
debugpy==1.6.3
decorator==5.1.1
deprecation==2.1.0
dill==0.3.5.1
dkpro-cassis==0.7.2
docker-pycreds==0.4.0
ecos==2.0.10
elasticsearch==7.13.0
emoji==2.0.0
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.3.0/en_core_web_sm-3.3.0-py3-none-any.whl
entrypoints==0.4
executing==1.0.0
fairscale==0.4.6
filelock==3.7.1
fire==0.4.0
fonttools==4.37.1
frozenlist==1.3.1
fsspec==2022.8.2
ftfy==6.1.1
future==0.18.2
gensim==4.2.0
gitdb==4.0.9
GitPython==3.1.27
google-api-core==2.8.2
google-auth==2.11.0
google-cloud-core==2.3.2
google-cloud-storage==2.5.0
google-crc32c==1.5.0
google-resumable-media==2.3.3
googleapis-common-protos==1.56.4
greenlet==1.1.3
h5py==3.7.0
hdbscan==0.8.28
huggingface-hub==0.8.1
hyperopt==0.2.7
idna==3.3
importlib-metadata==4.12.0
importlib-resources==5.4.0
inceptalytics==0.1.0
iniconfig==1.1.1
ipykernel==6.15.2
ipython==8.5.0
jedi==0.18.1
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.1.0
jsonnet==0.18.0
jupyter-core==4.11.1
jupyter_client==7.3.5
kiwisolver==1.4.4
krippendorff==0.5.1
langcodes==3.3.0
llvmlite==0.39.1
lmdb==1.3.0
lxml==4.9.1
Mako==1.2.2
MarkupSafe==2.1.1
matplotlib==3.5.3
matplotlib-inline==0.1.6
more-itertools==8.12.0
multidict==6.0.2
multiprocess==0.70.13
murmurhash==1.0.8
nest-asyncio==1.5.5
networkx==2.8.6
nltk==3.7
numba==0.56.2
numpy==1.23.3
optuna==2.10.1
osqp==0.6.2.post5
overrides==6.2.0
packaging==21.3
pandas==1.4.4
parso==0.8.3
pathtools==0.1.2
pathy==0.6.2
pbr==5.10.0
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.2.0
pluggy==1.0.0
preshed==3.0.7
prettytable==3.4.1
promise==2.3
prompt-toolkit==3.0.31
protobuf==3.20.0
psutil==5.9.2
pt-core-news-sm @ https://github.com/explosion/spacy-models/releases/download/pt_core_news_sm-3.3.0/pt_core_news_sm-3.3.0-py3-none-any.whl
ptyprocess==0.7.0
pure-eval==0.2.2
py==1.11.0
py-rouge==1.1
py4j==0.10.9.7
pyannote.core==4.5
pyannote.database==4.1.3
pyarrow==9.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycaprio==0.2.1
pydantic==1.8.2
pygamma-agreement==0.5.6
Pygments==2.13.0
pympi-ling==1.70.2
pyparsing==3.0.9
pyperclip==1.8.2
pytest==7.1.3
python-dateutil==2.8.2
pytz==2022.2.1
PyYAML==6.0
pyzmq==23.2.1
qdldl==0.1.5.post2
regex==2022.8.17
requests==2.28.1
requests-toolbelt==0.9.1
responses==0.18.0
rich==12.1.0
rsa==4.9
s3transfer==0.6.0
sacremoses==0.0.53
scikit-learn==1.1.2
scipy==1.9.1
scs==3.2.0
seaborn==0.12.0
sentence-transformers==2.2.2
sentencepiece==0.1.97
sentry-sdk==1.9.8
seqeval==1.2.2
setproctitle==1.3.2
shellingham==1.5.0
shortuuid==1.0.9
simplejson==3.17.6
six==1.16.0
sklearn==0.0
smart-open==5.2.1
smmap==5.0.0
sortedcontainers==2.4.0
spacy==3.3.1
spacy-legacy==3.0.10
spacy-loggers==1.0.3
split-datalawyer==0.1.80
SQLAlchemy==1.4.41
srsly==2.4.4
stack-data==0.5.0
stanza==1.4.0
stevedore==4.0.0
tensorboardX==2.5.1
termcolor==1.1.0
TextGrid==1.5
thinc==8.0.17
threadpoolctl==3.1.0
tokenizers==0.12.1
tomli==2.0.1
toposort==1.7
torch==1.13.0.dev20220911+cu117
torchvision==0.14.0.dev20220911+cu117
tornado==6.2
tqdm==4.64.1
traitlets==5.3.0
transformers==4.21.3
typer==0.4.2
typing_extensions==4.3.0
umap==0.1.1
Unidecode==1.3.4
urllib3==1.26.12
wandb==0.12.21
wasabi==0.10.1
wcwidth==0.2.5
word2number==1.1
xxhash==3.0.0
yarl==1.8.1
zipp==3.8.1
```
</p>
</details>
## Steps to reproduce
I'm attaching some parameters I used for running the CoNLL 2003 grid search.
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```
export BATCH_SIZE=8
export EPOCHS=10
export gradient_accumulation_steps=4
export dropout=0.2
export weight_decay=0
export seed=42
allennlp tune \
optuna_conll2003.jsonnet \
optuna-grid-search-conll2003-hparams.json \
--optuna-param-path optuna-grid-search-conll2003.json \
--serialization-dir /models/conll2003/benchmark_allennlp \
--study-name benchmark-allennlp-models-conll2003 \
--metrics test_f1-measure-overall \
--direction maximize \
--skip-if-exists \
--n-trials $1
```
</p>
[optuna_conll2003.jsonnet](https://github.com/allenai/allennlp/files/9560749/optuna_conll2003.jsonnet.txt)
[optuna-grid-search-conll2003.json](https://github.com/allenai/allennlp/files/9560750/optuna-grid-search-conll2003.json.txt)
[optuna-grid-search-conll2003-hparams.json](https://github.com/allenai/allennlp/files/9560751/optuna-grid-search-conll2003-hparams.json.txt)
</details>
|
open
|
2022-09-13T21:29:12Z
|
2022-10-12T16:47:20Z
|
https://github.com/allenai/allennlp/issues/5711
|
[
"bug",
"Under Development",
"question"
] |
pvcastro
| 12
|
plotly/dash
|
flask
| 3,185
|
dash v3.0.0rc3 - CSS did not apply on `read details`
|
Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Local development using dash v3.0.0rc3. I used Dash code that has some CSS styling and noticed that the `read details` view did not apply the CSS.
- replace the result of `pip list | grep dash` below
```
dash 3.0.0rc3
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: Mac
- Browser Chrome
- Version v3.0.0rc3
**Describe the bug**
Code used
```
# Import packages
from dash import Dash, html, dash_table, dcc, callback, Output, Input
import pandas as pd
import plotly.express as px
# Incorporate data
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/gapminder2007.csv')
# Initialize the app - incorporate css
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = Dash(external_stylesheets=external_stylesheets)
# App layout
app.layout = [
html.Div(className='row', children='My First App with Data, Graph, and Controls',
style={'textAlign': 'center', 'color': 'blue', 'fontSize': 30}),
html.Div(className='row', children=[
dcc.RadioItems(options=['pop', 'lifeExp', 'gdpPercap'],
value='lifeExp',
inline=True,
id='my-radio-buttons-final')
]),
html.Div(className='row', children=[
html.Div(className='six columns', children=[
dash_table.DataTable(data=df.to_dict('records'), page_size=11, style_table={'overflowX': 'auto'})
]),
html.Div(className='six columns', children=[
dcc.Graph(figure={}, id='histo-chart-final')
])
])
]
# Add controls to build the interaction
@callback(
Output(component_id='histo-chart-final', component_property='figure'),
Input(component_id='my-radio-buttons-final', component_property='value')
)
def update_graph(col_chosen):
fig = px.histogram(df, x='continent', y=col_chosen, histfunc='avg')
return fig
# Run the app
if __name__ == '__main__':
app.run(debug=True)
```
**Expected behavior**
Read details should apply CSS
**Screenshots**
<img width="998" alt="Image" src="https://github.com/user-attachments/assets/450d6c67-80d3-4244-b3d0-9b52dbd34d37" />
|
closed
|
2025-02-24T21:35:03Z
|
2025-02-28T20:56:54Z
|
https://github.com/plotly/dash/issues/3185
|
[
"bug",
"P1",
"cs"
] |
sadafnajam
| 1
|
BeanieODM/beanie
|
pydantic
| 992
|
[BUG] - Beanie migrations run throws no module named 'some_document'
|
**Describe the bug**
I tried running a migration that follows the [guideline](https://beanie-odm.dev/tutorial/migrations/), but when I run the migration it fails.
I tried putting it at various directory levels (I'm using FastAPI, so I tried the root, src, and inside the package holding the document I want to run the migration against).
**To Reproduce**
```shell
beanie migrate -uri 'mongodb://user:pwd@localhost:27017' -db 'some_db' -p src/models/primary --distance 1 --no-use-transaction
```
as well as
```shell
beanie migrate -uri 'mongodb://user:pwd@localhost:27017/some_db' -p src/models/primary --distance 1 --no-use-transaction
```
**Expected behavior**
Run the migration with no errors
**Additional context**
Add any other context about the problem here.
|
closed
|
2024-08-08T08:55:33Z
|
2024-10-16T02:41:35Z
|
https://github.com/BeanieODM/beanie/issues/992
|
[
"Stale"
] |
danielxpander
| 3
|
Lightning-AI/pytorch-lightning
|
pytorch
| 20,060
|
Use new state-dict APIs in FSDPStrategy
|
### Description & Motivation
In PyTorch 2.4, the [state-dict context managers we use in FSDPStrategy](https://github.com/Lightning-AI/pytorch-lightning/blob/5829ef8ab3bfa3eb03cfc35e842bef6ebd6bf007/src/lightning/fabric/strategies/fsdp.py#L800-L831) are being deprecated with FutureWarning in PyTorch 2.4. In #20010 I'm suppressing these warnings temporarily.
### Pitch
Use the new `torch.distributed.checkpoint` APIs for PyTorch >= 2.4.
### Alternatives
_No response_
### Additional context
_No response_
cc @borda @justusschock @awaelchli @carmocca
|
open
|
2024-07-08T08:55:58Z
|
2024-07-08T08:58:48Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20060
|
[
"feature",
"refactor",
"strategy: fsdp"
] |
awaelchli
| 1
|
jupyterhub/repo2docker
|
jupyter
| 946
|
Add option to skip using a Dockerfile
|
<!-- Thank you for contributing. These HTML commments will not render in the issue, but you can delete them once you've read them if you prefer! -->
### Proposed change
Make using the Dockerfile in the root of the repository an option by providing the means to disable Dockerfile buildpack detection.
### Alternative options
Only use Dockerfile.jupyter files, to distinguish between a Docker image delivering just the binary and a Dockerfile that includes all the required notebook dependencies.
### Who would use this feature?
This feature would be used by projects that use a Dockerfile as a means of distributing their binary, but don't have a notebook as their primary usage interface.
### How much effort will adding it take?
It would be as simple as including a new configuration key / file that can be read by the detection mechanism in the docker buildpack class. The alternative option is even easier, but I see some backwards compatibility issues for projects already using the Dockerfile customization.
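A rough sketch of the first option (all names here are hypothetical, not actual repo2docker traitlets): gate the Docker buildpack's `detect` behind a new flag so the other buildpacks take over.

```python
import os

class DockerBuildPack:
    """Illustrative stand-in for repo2docker's Docker buildpack."""

    ignore_dockerfile = False  # hypothetical new config option

    def __init__(self, repo_dir):
        self.repo_dir = repo_dir

    def detect(self):
        # When the flag is set, report "no match" even if a Dockerfile
        # exists, so detection falls through to the next buildpack.
        if self.ignore_dockerfile:
            return False
        return os.path.exists(os.path.join(self.repo_dir, "Dockerfile"))
```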
### Who can do this work?
Practically everyone with basic python knowledge. But I reckon we should discuss pros and cons first.
|
closed
|
2020-08-17T11:38:14Z
|
2020-09-07T06:21:41Z
|
https://github.com/jupyterhub/repo2docker/issues/946
|
[
"needs: discussion"
] |
eelkevdbos
| 4
|
gradio-app/gradio
|
machine-learning
| 10,281
|
Dragging in an image a second time will not replace the original image, it will open in a new tab
|
### Describe the bug
Dragging in an image a second time does not replace the original image; instead, the browser opens it in a new tab.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as demo:
with gr.Row():
input_image = gr.Image(
label="输入图像",
type="pil",
height=600,
width=400,
interactive=True
)
if __name__ == "__main__":
demo.launch(
server_name="0.0.0.0",
server_port=37865
)
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
gradio 5.6.0
gradio_client 1.4.3
```
### Severity
I can work around it
|
open
|
2025-01-03T02:42:32Z
|
2025-01-05T06:32:07Z
|
https://github.com/gradio-app/gradio/issues/10281
|
[
"bug"
] |
Dazidingo
| 2
|
kennethreitz/responder
|
graphql
| 185
|
Whitenoise-related 404 calls _default_wsgi_app which contains nothing
|
**How to reproduce ?**
- Create basic empty project.
```
import responder
api = responder.API()
```
- Serve it
- Go to /static/anythingthatdoesnotexist
**What happens ?**
```
500 Server Error
Traceback (most recent call last):
File "/Users/rd/.virtualenvs/Evaluate/lib/python3.7/site-packages/uvicorn/middleware/debug.py", line 80, in __call__
await asgi(receive, self.send)
File "/Users/rd/.virtualenvs/Evaluate/lib/python3.7/site-packages/asgiref/wsgi.py", line 41, in __call__
await self.run_wsgi_app(message)
File "/Users/rd/.virtualenvs/Evaluate/lib/python3.7/site-packages/asgiref/sync.py", line 108, in __call__
return await asyncio.wait_for(future, timeout=None)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py", line 388, in wait_for
return await fut
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/rd/.virtualenvs/Evaluate/lib/python3.7/site-packages/asgiref/sync.py", line 123, in thread_handler
return self.func(*args, **kwargs)
File "/Users/rd/.virtualenvs/Evaluate/lib/python3.7/site-packages/asgiref/wsgi.py", line 118, in run_wsgi_app
for output in self.wsgi_application(environ, self.start_response):
TypeError: 'NoneType' object is not iterable
```
**What is expected ?**
404, most probably ?
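A minimal sketch of what a safe fallback could look like — a default WSGI app that actually returns a 404 response instead of `None` (the name `default_wsgi_app` is illustrative, not responder's internals):

```python
def default_wsgi_app(environ, start_response):
    # Always answer 404 with a small plain-text body, so WSGI adapters
    # that iterate over the return value never see None
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"Not Found"]
```

Returning an iterable of bytes satisfies the WSGI contract that asgiref's `run_wsgi_app` relies on when it does `for output in self.wsgi_application(...)`.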
**Environment**
OSX Mojave, Python 3.7.1
|
closed
|
2018-11-03T14:09:16Z
|
2019-02-21T00:42:25Z
|
https://github.com/kennethreitz/responder/issues/185
|
[
"bug"
] |
hartym
| 2
|
tableau/server-client-python
|
rest-api
| 805
|
Update Group
|
Hi,
I was looking through the API documentation and noticed there's no way to update a group's name. Is there any way to go about updating a group's name through `RequestOptions` or something else?
|
closed
|
2021-02-22T19:44:00Z
|
2021-03-05T00:16:16Z
|
https://github.com/tableau/server-client-python/issues/805
|
[] |
mbabatunde
| 4
|
mwaskom/seaborn
|
data-visualization
| 2,763
|
similarity
|
Hello, seaborn provides the calculation of the similarity of two probability density distributions, i.e. the overlap between them.
|
closed
|
2022-03-17T02:13:39Z
|
2022-03-17T02:38:43Z
|
https://github.com/mwaskom/seaborn/issues/2763
|
[] |
OrdinarySK
| 1
|
scikit-learn/scikit-learn
|
python
| 30,377
|
⚠️ CI failed on Linux_free_threaded.pylatest_free_threaded (last failure: Dec 04, 2024) ⚠️
|
**CI is still failing on [Linux_free_threaded.pylatest_free_threaded](https://dev.azure.com/scikit-learn/scikit-learn/_build/results?buildId=72565&view=logs&j=c10228e9-6cf7-5c29-593f-d74f893ca1bd)** (Dec 04, 2024)
- test_csr_polynomial_expansion_index_overflow[csr_array-False-True-2-65535]
- test_csr_polynomial_expansion_index_overflow[csr_array-False-True-3-2344]
|
closed
|
2024-11-30T02:54:23Z
|
2024-12-09T12:45:30Z
|
https://github.com/scikit-learn/scikit-learn/issues/30377
|
[] |
scikit-learn-bot
| 4
|
getsentry/sentry
|
django
| 87,574
|
Internal server error when opening cron monitor details
|
### Environment
SaaS (https://sentry.io/)
### Steps to Reproduce
1. Create a cron monitor
2. Try to open the monitor details
### Expected Result
Being able to view the monitor details page
### Actual Result
The UI runs into an internal server error: [SENTRY-3M37](https://sentry.sentry.io/issues/6191972190/events/latest/?project=1&referrer=latest-event)
```
KeyError
5194751
```
This is affecting a few of the monitors in one of the affected organisations. Information on the organisation and the affected monitors can be found in this [internal ticket. ](https://sentry.zendesk.com/agent/tickets/148094)
### Product Area
Crons
### Link
_No response_
### DSN
_No response_
### Version
_No response_
|
open
|
2025-03-21T10:53:27Z
|
2025-03-21T19:33:15Z
|
https://github.com/getsentry/sentry/issues/87574
|
[
"getsentry/sentry",
"Crons"
] |
rodolfoBee
| 4
|
holoviz/panel
|
jupyter
| 7,515
|
Make Panel Extensions visible
|
Now that Panel extensions start to pop up I believe we should maintain a list somewhere. I thought about `panel-extensions` org. But that is not visible enough.
I think we should have a page similar to our Panel Components page or integrated with the Panel Components page (like Streamlit does).
At first we could just list, describe and link to them. Later we could add thumbnails if we believe it creates value.
----
```markdown
## 📚 Discover Extensions
Here we maintain a list of Panel Extensions maintained inside and outside this organisation.
### Widgets
#### [`panel-material-ui`](https://github.com/panel-extensions/panel-material-ui)
An extension to bring MaterialUI components to Panel.
### Visualizations
#### [`panel-graphic-walker`](https://github.com/panel-extensions/panel-graphic-walker)
A Tableau like *Graphic Walker* pane for use with HoloViz Panel.
### Jupyter Extensions
#### [`jupyter_bokeh`](https://github.com/bokeh/jupyter_bokeh)
A Jupyter extension for rendering Bokeh and Panel content within Jupyter environments.
#### [`ipywidgets_bokeh`](https://github.com/bokeh/ipywidgets_bokeh)
Support for using Ipywidgets within Bokeh and Panel.
### Templates
```
|
open
|
2024-11-23T13:23:22Z
|
2025-01-20T21:48:10Z
|
https://github.com/holoviz/panel/issues/7515
|
[
"type: docs"
] |
MarcSkovMadsen
| 1
|
neuml/txtai
|
nlp
| 559
|
Textractor paragraph level splits at the end of each line
|
I am trying to split a set of PDFs in paragraphs using textractor.
The command works, but it splits the text at the end of each line of the pdf rather than looking for meaningful paragraphs.
Example:
Albania’s greenhouse gas (GHG) mean annual emissions, according to the national inventory
prepared for the 4th National Communication and the final draft of 1st BUR, amounted to 10.8 Mt
CO2e/y in the period 2009-2016. Compared to the rest of Europe, this level of emission is low.
While the level of emissions per capita is 8.7 t CO2e/hab in the EU-27 in 2018, the level of
emission per capita in Albania is 3.5 t CO2e/hab in 2016.
The textractor at paragraph level creates a new paragraph whenever it sees the end of a line.
I guess the problem is in the PDF reader function.
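As a workaround sketch (plain Python, not txtai's API), hard-wrapped lines can be re-joined so that only blank lines count as paragraph breaks:

```python
import re

def merge_hard_wrapped(text):
    # Split on blank lines (real paragraph breaks), then collapse the
    # remaining single newlines inside each paragraph into spaces
    chunks = re.split(r"\n\s*\n", text)
    return [re.sub(r"\s*\n\s*", " ", c).strip() for c in chunks if c.strip()]
```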
Thanks,
F
|
closed
|
2023-09-20T15:47:46Z
|
2023-10-04T14:06:06Z
|
https://github.com/neuml/txtai/issues/559
|
[] |
thepinkclimate
| 5
|
waditu/tushare
|
pandas
| 1,699
|
Hong Kong Stock Connect monthly trading statistics: ggt_monthly returns no data after 202012
|
#### Problem description
The ggt_monthly API returns no data starting from month 202012
```python
pro.ggt_monthly(month="202012")
```
| month | day_buy_amt | day_buy_vol | day_sell_amt | day_sell_vol | total_buy_amt | total_buy_vol | total_sell_amt | total_sell_vol |
|--------:|--------------:|--------------:|---------------:|---------------:|----------------:|----------------:|-----------------:|-----------------:|
| 202012 | 63.49 | 9.43 | 58.38 | 8.84 | 1396.83 | 207.51 | 1284.45 | 194.52 |
```python
pro.ggt_monthly(month="202101")
```
| month | day_buy_amt | day_buy_vol | day_sell_amt | day_sell_vol | total_buy_amt | total_buy_vol | total_sell_amt | total_sell_vol |
|---------|---------------|---------------|----------------|----------------|-----------------|-----------------|------------------|------------------|
```python
pro.ggt_monthly(end_month="202303")
```
| month | day_buy_amt | day_buy_vol | day_sell_amt | day_sell_vol | total_buy_amt | total_buy_vol | total_sell_amt | total_sell_vol |
|--------:|--------------:|--------------:|---------------:|---------------:|----------------:|----------------:|-----------------:|-----------------:|
| 202012 | 63.49 | 9.43 | 58.38 | 8.84 | 1396.83 | 207.51 | 1284.45 | 194.52 |
| 202011 | 72.97 | 10.31 | 56.59 | 8.84 | 1532.47 | 216.61 | 1188.48 | 185.58 |
| 202010 | 62.93 | 9.83 | 38.31 | 7.02 | 943.91 | 147.46 | 574.72 | 105.31 |
| 202009 | 49.17 | 8.22 | 44.34 | 6.94 | 983.42 | 164.33 | 886.87 | 138.78 |
| 202008 | 56.75 | 8.86 | 53.01 | 8.2 | 1191.82 | 186.15 | 1113.25 | 172.16 |
| 202007 | 93.41 | 13.62 | 77.44 | 11.24 | 2055.06 | 299.55 | 1703.7 | 247.2 |
| 202006 | 49.12 | 7.92 | 46.82 | 8.07 | 884.2 | 142.62 | 842.68 | 145.26 |
| 202005 | 53.33 | 8.43 | 49.5 | 8.65 | 959.92 | 151.74 | 890.95 | 155.75 |
| 202004 | 49.45 | 8.62 | 42.45 | 8.57 | 692.35 | 120.65 | 594.29 | 119.95 |
| 202003 | 90.58 | 13.84 | 50.74 | 9.33 | 1992.71 | 304.37 | 1116.21 | 205.23 |
| 202002 | 65.75 | 9.3 | 45.05 | 7.63 | 1315 | 186.06 | 900.94 | 152.55 |
| 202001 | 46.85 | 7.38 | 38.35 | 6.7 | 655.96 | 103.32 | 536.91 | 93.76 |
| 201912 | 37.61 | 5.66 | 27.17 | 5.08 | 752.11 | 113.24 | 543.41 | 101.5 |
#### Version info
- Python: 3.8.12
- Tushare: 1.2.89
#### Tushare ID
- 390865
|
open
|
2023-04-03T00:33:02Z
|
2023-04-03T00:33:02Z
|
https://github.com/waditu/tushare/issues/1699
|
[] |
cheng-jiang
| 0
|
piskvorky/gensim
|
nlp
| 2,873
|
Further focus/slim keyedvectors.py module
|
Pre-#2698, `keyedvectors.py` was 2500+ lines, including functionality over-specific to other models, & redundant classes. Post-#2698, with some added generic functionality, it's still over 1800 lines.
It should shed some other grab-bag utility functions that have accumulated, & don't logically fit inside the `KeyedVectors` class.
In particular, the evaluation (analogies, word_ranks) helpers could move to their own module that takes a KV instance as an argument. (If other more-sophisticated evaluations can be contributed, as would be welcome, they should also live alongside those, rather than bloating `KeyedVectors`.)
The `get_keras_embedding` method, as its utility is narrow to very specific uses, and is conditional on a not-necessarily-installed package, could go elsewhere too – either a keras-focused utilities module, or even just documentation/example code about how to convert to/from keras from `KeyedVectors`.
Some of the more advanced word-vector-**using** calculations, like 'Word Mover's Distance' or 'Soft Cosine Similarity', could move to method-specific modules that are then better documented/self-contained/optimized, without bloating the generic 'set of vectors' module. (They might be more discoverable, there, as well.)
And finally, some of the existing calculations could be unified/streamlined (especially the two variants of `most_similar()`, and some of the steps shared by multiple operations). My hope would be the module is eventually <1000 lines.
|
open
|
2020-07-06T20:00:39Z
|
2021-03-09T07:59:52Z
|
https://github.com/piskvorky/gensim/issues/2873
|
[] |
gojomo
| 8
|
pyeve/eve
|
flask
| 1,336
|
How to exclude nested relation fields?
|
I have a relation that is embeddable, for example:
```javascript
{
root: true,
embeddable: {
root: false
foo: "bar"
}
}
```
How can I exclude the `embeddable.root` field? I tried with `?embedded={"embeddable": 1}&projection={"embeddable.root": 0}` but it doesn't do anything.
|
closed
|
2019-12-09T23:52:27Z
|
2019-12-21T00:24:56Z
|
https://github.com/pyeve/eve/issues/1336
|
[] |
tanoabeleyra
| 1
|
Yorko/mlcourse.ai
|
pandas
| 593
|
Dead link in README.md
|
The page is not available:
https://github.com/Yorko/mlcourse.ai/wiki/About-the-course-(in-Russian)
|
closed
|
2019-05-18T15:47:47Z
|
2019-05-18T17:58:59Z
|
https://github.com/Yorko/mlcourse.ai/issues/593
|
[] |
i-aztec
| 1
|
ScottfreeLLC/AlphaPy
|
pandas
| 30
|
Transforms Dictionary Error
|
Fix bug for multiple transforms per column, as all but one will be dropped in the Python dictionary.
```
transforms:
date : ['alphapy.transforms', 'extract_bizday']
date : ['alphapy.transforms', 'extract_date']
date : ['alphapy.transforms', 'extract_time']
```
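A short sketch of why this happens, and one possible workaround: duplicate keys in a dict (which is what the YAML above parses into) silently keep only the last value, so a list of transforms per column avoids the loss (the list-valued schema is a suggestion, not necessarily AlphaPy's current format):

```python
# Duplicate keys: only the last assignment survives
transforms = {
    "date": ["alphapy.transforms", "extract_bizday"],
    "date": ["alphapy.transforms", "extract_date"],
    "date": ["alphapy.transforms", "extract_time"],
}
assert len(transforms) == 1  # the first two entries were silently dropped

# Workaround: one key mapping to a list of transforms
transforms = {
    "date": [
        ["alphapy.transforms", "extract_bizday"],
        ["alphapy.transforms", "extract_date"],
        ["alphapy.transforms", "extract_time"],
    ]
}
```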
|
closed
|
2020-03-15T00:15:58Z
|
2020-08-23T22:02:59Z
|
https://github.com/ScottfreeLLC/AlphaPy/issues/30
|
[] |
mrconway
| 1
|
wandb/wandb
|
data-science
| 8,834
|
[Feature]: Set custom artifact name in WandbModelCheckpoint
|
### Description
<!--- Describe your feature here --->
Hi,
I've noticed that when using [WandbModelCheckpoint's](https://docs.wandb.ai/ref/python/integrations/keras/wandbmodelcheckpoint/) it's currently not possible to define a custom name for the model artifact. The name is automatically set to `run_{wandb.run.id}_model` as specified in the [_log_ckpt_as_artifact](https://github.com/wandb/wandb/blob/6bc2440183e02633c11fdd291550c5eb9c0b4634/wandb/integration/keras/callbacks/model_checkpoint.py#L154) function.
This presents a challenge when saving models for multiple metrics within a single run. For instance, consider having a WandbModelCheckpoint for different metrics such as loss, recall, precision, F1 score, IoU, Dice, etc. Since the artifact name remains the same across all metrics, the model versioning becomes ineffective. This results in every model for a specific metric being stored under the same artifact name, leading to many versions that aren't metric-specific.
Consequently, identifying the best-performing model for a particular metric can become quite cumbersome. It often requires sifting through log files or exploring the latest model versions.
Here's an example of what the end result looks like:

### Suggested Solution
<!--- Describe your solution here --->
I would like to be able to define the models artifact name when specifying the callbacks for [WandbModelCheckpoint's](https://docs.wandb.ai/ref/python/integrations/keras/wandbmodelcheckpoint/).
|
open
|
2024-11-12T14:45:14Z
|
2024-12-13T09:40:48Z
|
https://github.com/wandb/wandb/issues/8834
|
[
"ty:feature",
"c:artifacts"
] |
GabrielGosden
| 5
|
activeloopai/deeplake
|
tensorflow
| 2,150
|
[FEATURE]
|
## 🚨🚨 Feature Request
- [ ] Related to an existing [Issue](../issues)
- [ ] A new implementation (Improvement, Extension)
### Is your feature request related to a problem?
A clear and concise description of what the problem is. Ex. I have an issue when [...]
### If your feature will improve `HUB`
A clear and concise description of how it will help `HUB`. Please prefer references, if possible [...]
### Description of the possible solution
A clear and concise description of what you want to happen. Add any considered drawbacks.
### An alternative solution to the problem can look like
A clear and concise description of any alternative solutions or features you've considered.
**Teachability, Documentation, Adoption, Migration Strategy**
If you can, explain how users will be able to use this and possibly write out a version the docs.
Maybe a screenshot or design?
|
closed
|
2023-01-30T20:17:42Z
|
2024-09-19T08:45:42Z
|
https://github.com/activeloopai/deeplake/issues/2150
|
[
"enhancement"
] |
AtelLex
| 2
|
BeanieODM/beanie
|
asyncio
| 142
|
VSCode Pylance language server and its type checker, Pyright, report some errors for the code examples on the doc site
|
beanie ver.: 1.7.0
os: macOS 11.6, m1 machine
VSCode: 1.60.1
VSCode Python extension: v2021.9.1246542782
1. `Indexed`: (related to #10, #59, #105)
```
Illegal type annotation: call expression not allowed Pylance(reportGeneralTypeIssues)
```
2. `"motor_asyncio"` is not a known member of module Pylance(reportGeneralTypeIssues), will be solved by #147
3. `await product.set({Product.name: "Gold bar"})`
a. `"Product"` is not awaitable Pylance(reportGeneralTypeIssues)
b. "set" is not a known member of "None" Pylance(reportOptionalMemberAccess)
Pylance ref:
https://github.com/microsoft/pylance-release/blob/main/DIAGNOSTIC_SEVERITY_RULES.md#diagnostic-severity-rules
To temporarily suppress this errors:
1. within a file, put `# pyright: reportGeneralTypeIssues=warning, reportOptionalMemberAccess=warning`. (or warning-> false) at the top of the file
2. single line: put this at the end of the line `# type: ignore` (PEP484)
<img width="892" alt="Screenshot 2021-11-17 2:17 PM" src="https://user-images.githubusercontent.com/5940941/142242169-ea09a7fa-ecf4-4fb8-93d1-f14fd217be44.png">
|
closed
|
2021-11-17T16:45:53Z
|
2023-03-19T02:37:24Z
|
https://github.com/BeanieODM/beanie/issues/142
|
[
"Stale"
] |
grimmerk
| 3
|
piccolo-orm/piccolo
|
fastapi
| 694
|
Unable to connect to the database
|
Hello. I really like Piccolo ORM and would like to use it.
- I am trying to start a new project with "piccolo asgi new".
- I start it up and go to http://localhost:8000/admin/
- I enter my login and password, and then I click to log in
- An error occurs: `OSError: Multiple exceptions: [Errno 111] Connect call failed ('127.0.0.1', 5433), [Errno 99] Cannot assign requested address`.
I am using a postgres image in docker. The database is created in a container and I can connect to it. But piccolo can't see it and every time it displays the message `Unable to connect to the database.` Please help me to solve this problem. Thank you in advance.
dockerfile:
```
FROM python:3.10.7-bullseye
WORKDIR /backend
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONBUFFERED 1
# install system dependencies
RUN apt-get update \
&& apt-get -y install netcat gcc postgresql \
&& apt-get clean
RUN apt-get update
# install python dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /backend/requirements.txt
RUN pip install -r requirements.txt
COPY . /backend
```
docker-compose:
```
version: '3.8'
services:
server:
build:
context: ./backend
dockerfile: Dockerfile
volumes:
- ./backend/:/backend/
# command: uvicorn app:app --reload --workers 1 --host 0.0.0.0 --port 8000
command: python ./main.py
env_file:
- ./backend/.env
ports:
- 8000:8000
depends_on:
- db
db:
image: postgres:15.1-alpine
volumes:
- postgres_data:/var/lib/postgresql/data/
env_file:
- ./backend/.env
ports:
- 5432:5432
adminer:
image: adminer
restart: always
depends_on:
- db
ports:
- 8080:8080
redis:
image: 'redis/redis-stack'
restart: always
ports:
- '6379:6379'
- '8001:8001'
volumes:
postgres_data: #
```
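Since the app runs inside a container, `127.0.0.1` refers to the app container itself, not the `db` service; the traceback also shows port 5433 while the compose file maps 5432. A sketch of a `piccolo_conf.py` that points at the compose service instead (database name, user, and password are placeholders — match your `.env`):

```python
# piccolo_conf.py (config sketch, not the generated file verbatim)
from piccolo.engine.postgres import PostgresEngine
from piccolo.conf.apps import AppRegistry

DB = PostgresEngine(
    config={
        "database": "mydb",      # placeholder — use the value from .env
        "user": "postgres",      # placeholder
        "password": "postgres",  # placeholder
        "host": "db",            # compose service name, not 127.0.0.1
        "port": 5432,            # port postgres listens on inside the network
    }
)

APP_REGISTRY = AppRegistry(apps=["home.piccolo_app", "piccolo_admin.piccolo_app"])
```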
Exceptions:
```
adminer_1 | [Wed Nov 30 09:23:49 2022] PHP 7.4.33 Development Server (http://[::]:8080) started
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2022-11-30 09:23:48.813 UTC [1] LOG: starting PostgreSQL 15.1 on x86_64-pc-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
db_1 | 2022-11-30 09:23:48.813 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2022-11-30 09:23:48.813 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2022-11-30 09:23:48.826 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2022-11-30 09:23:48.837 UTC [23] LOG: database system was shut down at 2022-11-30 09:23:42 UTC
redis_1 | 9:C 30 Nov 2022 09:23:48.764 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 9:C 30 Nov 2022 09:23:48.764 # Redis version=6.2.7, bits=64, commit=00000000, modified=0, pid=9, just started
redis_1 | 9:C 30 Nov 2022 09:23:48.764 # Configuration loaded
db_1 | 2022-11-30 09:23:48.852 UTC [1] LOG: database system is ready to accept connections
redis_1 | 9:M 30 Nov 2022 09:23:48.765 * monotonic clock: POSIX clock_gettime
redis_1 | 9:M 30 Nov 2022 09:23:48.766 # A key '__redis__compare_helper' was added to Lua globals which is not on the globals allow list nor listed on the deny list.
redis_1 | 9:M 30 Nov 2022 09:23:48.767 * Running mode=standalone, port=6379.
redis_1 | 9:M 30 Nov 2022 09:23:48.767 # Server initialized
redis_1 | 9:M 30 Nov 2022 09:23:48.767 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 9:M 30 Nov 2022 09:23:48.781 * <search> Redis version found by RedisSearch : 6.2.7 - oss
redis_1 | 9:M 30 Nov 2022 09:23:48.781 * <search> RediSearch version 2.4.16 (Git=HEAD-b10e5644)
redis_1 | 9:M 30 Nov 2022 09:23:48.781 * <search> Low level api version 1 initialized successfully
redis_1 | 9:M 30 Nov 2022 09:23:48.782 * <search> concurrent writes: OFF, gc: ON, prefix min length: 2, prefix max expansions: 200, query timeout (ms): 500, timeout policy: return, cursor read size: 1000, cursor max idle (ms): 300000, max doctable size: 1000000, max number of search results: 10000, search pool size: 20, index pool size: 8,
redis_1 | 9:M 30 Nov 2022 09:23:48.783 * <search> Initialized thread pool!
redis_1 | 9:M 30 Nov 2022 09:23:48.783 * <search> Enabled diskless replication
redis_1 | 9:M 30 Nov 2022 09:23:48.783 * <search> Enabled role change notification
redis_1 | 9:M 30 Nov 2022 09:23:48.784 * Module 'search' loaded from /opt/redis-stack/lib/redisearch.so
redis_1 | 9:M 30 Nov 2022 09:23:48.805 * <graph> Starting up RedisGraph version 2.8.20.
redis_1 | 9:M 30 Nov 2022 09:23:48.810 * <graph> Thread pool created, using 12 threads.
redis_1 | 9:M 30 Nov 2022 09:23:48.810 * <graph> Maximum number of OpenMP threads set to 12
redis_1 | 9:M 30 Nov 2022 09:23:48.811 * Module 'graph' loaded from /opt/redis-stack/lib/redisgraph.so
redis_1 | 9:M 30 Nov 2022 09:23:48.814 * <timeseries> RedisTimeSeries version 10617, git_sha=c32b0cb0d8ebc1d31b74c624b8b0424628ff7887
redis_1 | 9:M 30 Nov 2022 09:23:48.814 * <timeseries> Redis version found by RedisTimeSeries : 6.2.7 - oss
redis_1 | 9:M 30 Nov 2022 09:23:48.814 * <timeseries> loaded default CHUNK_SIZE_BYTES policy: 4096
redis_1 | 9:M 30 Nov 2022 09:23:48.814 * <timeseries> loaded server DUPLICATE_POLICY: block
redis_1 | 9:M 30 Nov 2022 09:23:48.814 * <timeseries> Setting default series ENCODING to: compressed
redis_1 | 9:M 30 Nov 2022 09:23:48.815 * <timeseries> Detected redis oss
redis_1 | 9:M 30 Nov 2022 09:23:48.816 * <timeseries> Enabled diskless replication
redis_1 | 9:M 30 Nov 2022 09:23:48.816 * Module 'timeseries' loaded from /opt/redis-stack/lib/redistimeseries.so
redis_1 | 9:M 30 Nov 2022 09:23:48.818 * <ReJSON> version: 20200 git sha: 7584706 branch: HEAD
redis_1 | 9:M 30 Nov 2022 09:23:48.818 * <ReJSON> Exported RedisJSON_V1 API
redis_1 | 9:M 30 Nov 2022 09:23:48.818 * <ReJSON> Exported RedisJSON_V2 API
redis_1 | 9:M 30 Nov 2022 09:23:48.818 * <ReJSON> Enabled diskless replication
redis_1 | 9:M 30 Nov 2022 09:23:48.818 * <ReJSON> Created new data type 'ReJSON-RL'
redis_1 | 9:M 30 Nov 2022 09:23:48.818 * Module 'ReJSON' loaded from /opt/redis-stack/lib/rejson.so
redis_1 | 9:M 30 Nov 2022 09:23:48.818 * <search> Acquired RedisJSON_V1 API
redis_1 | 9:M 30 Nov 2022 09:23:48.818 * <graph> Acquired RedisJSON_V1 API
redis_1 | 9:M 30 Nov 2022 09:23:48.819 * <bf> RedisBloom version 2.2.18 (Git=8b6ee3b)
redis_1 | 9:M 30 Nov 2022 09:23:48.819 * Module 'bf' loaded from /opt/redis-stack/lib/redisbloom.so
redis_1 | 9:M 30 Nov 2022 09:23:48.820 * Ready to accept connections
server_1 | INFO: Will watch for changes in these directories: ['/backend']
server_1 | INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
server_1 | INFO: Started reloader process [1] using StatReload
server_1 | INFO: Started server process [8]
server_1 | INFO: Waiting for application startup.
server_1 | /usr/local/lib/python3.10/asyncio/events.py:80: Warning: Unable to fetch server version: Multiple exceptions: [Errno 111] Connect call failed ('127.0.0.1', 5433), [Errno 99] Cannot assign requested address
server_1 | File "/backend/piccolo_conf.py", line 6, in <module>
server_1 | DB = PostgresEngine(
server_1 | File "/usr/local/lib/python3.10/site-packages/piccolo/engine/postgres.py", line 290, in __init__
server_1 | super().__init__()
server_1 | File "/usr/local/lib/python3.10/site-packages/piccolo/engine/base.py", line 28, in __init__
server_1 | run_sync(self.prep_database())
server_1 | File "/usr/local/lib/python3.10/site-packages/piccolo/utils/sync.py", line 24, in run_sync
server_1 | return future.result()
server_1 | File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 458, in result
server_1 | return self.__get_result()
server_1 | File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
server_1 | raise self._exception
server_1 | File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
server_1 | result = self.fn(*self.args, **self.kwargs)
server_1 | File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
server_1 | return loop.run_until_complete(main)
server_1 | File "/usr/local/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
server_1 | return future.result()
server_1 | File "/usr/local/lib/python3.10/site-packages/piccolo/engine/postgres.py", line 330, in prep_database
server_1 | await self._run_in_new_connection(
server_1 | File "/usr/local/lib/python3.10/site-packages/piccolo/engine/postgres.py", line 432, in _run_in_new_connection
server_1 | connection = await self.get_new_connection()
server_1 | File "/usr/local/lib/python3.10/site-packages/piccolo/engine/postgres.py", line 389, in get_new_connection
server_1 | return await asyncpg.connect(**self.config)
server_1 | File "/usr/local/lib/python3.10/site-packages/asyncpg/connection.py", line 2092, in connect
server_1 | return await connect_utils._connect(
server_1 | File "/usr/local/lib/python3.10/site-packages/asyncpg/connect_utils.py", line 895, in _connect
server_1 | raise last_error
server_1 | File "/usr/local/lib/python3.10/site-packages/asyncpg/connect_utils.py", line 881, in _connect
server_1 | return await _connect_addr(
server_1 | File "/usr/local/lib/python3.10/site-packages/asyncpg/connect_utils.py", line 773, in _connect_addr
server_1 | return await __connect_addr(params, timeout, True, *args)
server_1 | File "/usr/local/lib/python3.10/site-packages/asyncpg/connect_utils.py", line 825, in __connect_addr
server_1 | tr, pr = await compat.wait_for(connector, timeout=timeout)
server_1 | File "/usr/local/lib/python3.10/site-packages/asyncpg/compat.py", line 56, in wait_for
server_1 | return await asyncio.wait_for(fut, timeout)
server_1 | File "/usr/local/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
server_1 | return fut.result()
server_1 | File "/usr/local/lib/python3.10/site-packages/asyncpg/connect_utils.py", line 684, in _create_ssl_connection
server_1 | tr, pr = await loop.create_connection(
server_1 | File "/usr/local/lib/python3.10/asyncio/base_events.py", line 1072, in create_connection
server_1 | raise OSError('Multiple exceptions: {}'.format(
server_1 | OSError: Multiple exceptions: [Errno 111] Connect call failed ('127.0.0.1', 5433), [Errno 99] Cannot assign requested address
db_1 | 2022-11-30 09:28:48.847 UTC [21] LOG: checkpoint starting: time
db_1 | 2022-11-30 09:28:48.865 UTC [21] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.006 s, sync=0.002 s, total=0.018 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB
```
|
closed
|
2022-11-30T09:42:33Z
|
2022-12-01T11:41:31Z
|
https://github.com/piccolo-orm/piccolo/issues/694
|
[] |
NikSan3452
| 5
|
litestar-org/litestar
|
pydantic
| 3,181
|
Enhancement: Harmonize example generation
|
### Summary
The current OpenAPI example generation is a bit of a mixed bag, and this issue collects the various separate issues and documents some extra needs.
---
1. ~Support `ResponseSpec(examples=...)` for custom response examples~
~This is https://github.com/litestar-org/litestar/issues/3068 (merged)~
---
The example generation is _turned off by default_, but can be enabled by `Litestar(openapi_config=OpenAPIConfig(create_examples=...))`.
Yet, with example generation **off**:
2. Examples are generated for _error responses_:
```json
"examples": {
"NotAuthorized": {
"value": {
"status_code": 401,
"detail": "Missing or invalid credentials",
"extra": {}
}
}
}
}
```
This is not tunable, they are always there. Would expect these to **not** exist by default (with example generation turned off).
3. ~Examples are generated for response models:~
~Even with `Litestar(openapi_config=OpenAPIConfig(create_examples=False))` (default), OpenAPI `$.components.schemas` models get examples.~
~Turning `ResponseSpec(..., generate_examples=False)` disables this, but you'd expect the global flag to do the same first.~
This one was an invalid report; it actually worked just fine. Ignore this bullet.
---
Once you actually set `Litestar(openapi_config=OpenAPIConfig(create_examples=True))`, you will get examples for:
- Query, Path, Header, Cookie parameters
- Request bodies
- Response bodies
3. It's possible to locally turn example generation off per `ResponseSpec` but not per `Parameter`
Allow `Parameter(generate_examples=False)` to sync with `ResponseSpec` support.
---
4. The generated examples are inserted as both OpenAPI and JSON schema examples, ie:
```
{
"name": "path_arg",
"in": "path",
"schema": {
"type": "string",
"examples": { <------- HERE
"path_arg-example-1": {
"value": "EXAMPLE_VALUE"
}
}
},
"required": true,
"deprecated": false,
"allowEmptyValue": false,
"allowReserved": false,
"examples": { <------- HERE
"path_arg-example-1": {
"value": "EXAMPLE_VALUE"
}
}
}
```
It _maybe_ feels a bit unnecessary to do both, when OpenAPI ones should suffice.
Example generation could be a toggle instead of a flag. It could support 4 modes:
1. OpenAPI schema only
2. JSON schema only
3. Both JSON + OpenAPI
4. None
Litestar uses (3), generating examples in both. Even if this toggle doesn't get implemented, I believe it's good to identify the case here.
This is https://github.com/litestar-org/litestar/issues/3057.
---
5. ~The order of OpenAPI schema changes between executions~
~This is https://github.com/litestar-org/litestar/issues/3059 (merged).~
---
6. Define examples in Pydantic (and Msgspec, dataclass, TypedDict) models
This already works for Pydantic's `Field(examples=[...])`, but not for `ConfigDict(json_schema_extra=...)`. There should also be a mechanism to define the examples _for the model, within the model_ (not only per field). (It was deemed somewhere that Pydantic's `json_schema_extra` wouldn't be supported, but there needs to be _some_ mechanism for this, like for Msgspec, dataclass, et al).
Support for declaring query/path/header/cookie arguments as models is tracked in https://github.com/litestar-org/litestar/issues/2015. This is probably related, once there's full support for those, also defining examples in the models probably gets supported?
---
7. ~`dict[str, Any]` and `None` trigger example generation always~
~This is https://github.com/litestar-org/litestar/issues/3069 (merged)~
### Basic Example
_No response_
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_
|
open
|
2024-03-08T14:57:24Z
|
2025-03-20T15:54:28Z
|
https://github.com/litestar-org/litestar/issues/3181
|
[
"Enhancement",
"Great Writeup"
] |
tuukkamustonen
| 8
|
chmp/ipytest
|
pytest
| 25
|
Cannot find @given for pytest-bdd
|
When executing pytest_bdd tests, the step definitions are not found. I have tried both `ipytest.run()` and the `%%run_pytest[clean]` magic, but the step definitions are not identified in the file.
> pytest_bdd.exceptions.StepDefinitionNotFoundError: Step definition is not found:
The following is a straightforward Gherkin feature file:
```
Feature: DuckDuckGo Web Browsing
Background:
Given the DuckDuckGo home page is displayed
Scenario: Basic DuckDuckGo Search
When the user searches for "panda"
Then results are shown for "panda"
```
and the steps code:
```
import pytest_bdd
import pytest
from selenium.webdriver import Firefox
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.options import Options
options = Options()
options.headless = True
DUCKDUCKGO_HOME = 'https://duckduckgo.com/'
@pytest.fixture
def browser():
driver = Firefox(options=options)
driver.implicitly_wait(10)
yield driver
driver.quit()
@pytest_bdd.scenario('websearch.feature','Basic DuckDuckGo Search', features_base_dir='', strict_gherkin=False)
def test_websearch():
pass
@pytest_bdd.given('the DuckDuckGo home page is displayed')
def ddg_home(browser):
browser.get(DUCKDUCKGO_HOME)
@pytest_bdd.when(pytest_bdd.parsers.parse('the user searches for "{phrase}"'))
def search_phrase(browser, phrase):
search_input = browser.find_element_by_id('search_form_input_homepage')
search_input.send_keys(phrase + Keys.RETURN)
@pytest_bdd.then(pytest_bdd.parsers.parse('results are shown for "{phrase}"'))
def search_results(browser, phrase):
links_div = browser.find_element_by_id('links')
assert len(links_div.find_elements_by_xpath('//div')) > 0
search_input = browser.find_element_by_id('search_form_input')
    assert search_input.get_attribute('value') == phrase
```
On the other hand, running these from the command-line using pytest works perfectly fine. Has anyone experienced this before?
|
closed
|
2019-10-16T01:11:18Z
|
2019-11-19T21:39:05Z
|
https://github.com/chmp/ipytest/issues/25
|
[] |
sunbiz
| 2
|
Yorko/mlcourse.ai
|
data-science
| 719
|
Typo in the feature naming in Reduction Impurity counting
|
In the book, [Feature importance page](https://mlcourse.ai/book/topic05/topic5_part3_feature_importance.html) there is a typo in the feature name. One of the chosen should be "Petal length (cm)".
<img width="767" alt="image" src="https://user-images.githubusercontent.com/17138883/189652317-d999f0a6-43bc-4b74-99c7-a3b0ba1a117d.png">
|
closed
|
2022-09-12T12:26:14Z
|
2022-09-13T23:01:01Z
|
https://github.com/Yorko/mlcourse.ai/issues/719
|
[] |
aulasau
| 1
|
matplotlib/mplfinance
|
matplotlib
| 458
|
Feature Request: Facilitate parsing of OHLCV data (facing opaque `TypeError: Expect data.index as DatetimeIndex`)
|
Problem:
I'm trying to plot OHLCV data from the CoinBase API; however, I'm getting an error without a clear lead to a fix.
I bet others will get stuck here too.
Requested solution:
It would be great if the library either handled this data format or documented how to achieve the correct format.
Data:
```
[{'open': 0.005846639252602948,
'high': 0.0062519524982233106,
'low': 0.005457133404836241,
'close': 0.006138245355734082,
'volume': 43409.74,
'market_cap': 1401259.98,
'timestamp': '2021-10-02T23:59:59.999Z'},
{'open': 0.006134242128329189,
'high': 0.0063679709676487005,
'low': 0.006059241124035879,
'close': 0.006222796245534013,
'volume': 8918.02,
'market_cap': 1420561.55,
'timestamp': '2021-10-03T23:59:59.999Z'},
{'open': 0.006223039734629446,
'high': 0.0062474505752106915,
'low': 0.005435646019963704,
'close': 0.005665330087500413,
'volume': 61171.43,
'market_cap': 1293301.24,
'timestamp': '2021-10-04T23:59:59.999Z'},
]
```
Attempt:
```
df_quote_data = pd.DataFrame(quote_data)
import mplfinance as mpf
mpf.plot(df_quote_data)
```
Error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/var/folders/nj/rqdlt_hs4k5dwr5xrpgmn0kh0000gn/T/ipykernel_22891/1573098417.py in <module>
1 import mplfinance as mpf
----> 2 mpf.plot(df_quote_data)
~/Desktop/CMC/.venv/lib/python3.10/site-packages/mplfinance/plotting.py in plot(data, **kwargs)
296 config['type'] = _get_valid_plot_types(config['type'])
297
--> 298 dates,opens,highs,lows,closes,volumes = _check_and_prepare_data(data, config)
299
300 config['xlim'] = _check_and_convert_xlim_configuration(data, config)
~/Desktop/CMC/.venv/lib/python3.10/site-packages/mplfinance/_arg_validators.py in _check_and_prepare_data(data, config)
28
29 if not isinstance(data.index,pd.core.indexes.datetimes.DatetimeIndex):
---> 30 raise TypeError('Expect data.index as DatetimeIndex')
31
32 if (len(data.index) > config['warn_too_much_data'] and
TypeError: Expect data.index as DatetimeIndex
```
Attempts:
1. Converting datetime to unix timestamp:
```
timestamps = df_quote_data['timestamp']
dates = pd.to_datetime(timestamps)
ts = (dates - pd.Timestamp("1970-01-01T00:00:00.000Z")) // pd.Timedelta('1s')
df_quote_data['timestamp'] = ts
```
2. Renaming cols
```
df_quote_data.rename(
columns={
'open': 'Open',
'close': 'Close',
'high': 'High',
'low': 'Low',
'timestamp': 'Date'
},
inplace=True,
errors='raise'
)
```
3. Cropping unwanted cols and reordering
```
df = df_quote_data[['timestamp', 'open', 'high', 'low', 'close']]
```
Even all 3 together doesn't crack it.
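For anyone hitting the same error: the check fails because the timestamps remain a plain column rather than the DataFrame's index. A minimal sketch (using a trimmed copy of the data above) that produces the `DatetimeIndex` mplfinance expects:

```python
import pandas as pd

quote_data = [
    {'open': 0.0058, 'high': 0.0062, 'low': 0.0054, 'close': 0.0061,
     'volume': 43409.74, 'timestamp': '2021-10-02T23:59:59.999Z'},
    {'open': 0.0061, 'high': 0.0063, 'low': 0.0060, 'close': 0.0062,
     'volume': 8918.02, 'timestamp': '2021-10-03T23:59:59.999Z'},
]

df = pd.DataFrame(quote_data)
# Parse the ISO timestamps and make them the index -- this is what
# mplfinance's `isinstance(data.index, DatetimeIndex)` check requires.
df['timestamp'] = pd.to_datetime(df['timestamp'])
df = df.set_index('timestamp')
# mplfinance also expects capitalised OHLCV column names.
df = df.rename(columns={'open': 'Open', 'high': 'High', 'low': 'Low',
                        'close': 'Close', 'volume': 'Volume'})

print(isinstance(df.index, pd.DatetimeIndex))  # True
```

After this, `mpf.plot(df)` should accept the frame.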
|
closed
|
2021-10-29T17:44:30Z
|
2021-10-29T20:32:16Z
|
https://github.com/matplotlib/mplfinance/issues/458
|
[
"enhancement"
] |
p-i-
| 3
|
ultralytics/ultralytics
|
machine-learning
| 19,357
|
Train and val losses became "NaN" but metrics do not update accordingly.
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
It seems that this is slightly related to #18521
During training several models (yolov8, v9, v10, 11) for a custom dataset with different configurations, the train and val losses became NaN for several of the possible combinations. However, that is not the problem here, but how the monitoring and early stopping works with it. All training configurations have `optimizer=Adam`, `epochs=300`, `patience=30` and `AMP=True`. I'm using a single V100-SXM2-32GB for training.
There are two different cases that I've observed:
Case 1: Losses become NaN, the best metrics are acquired just before the losses became NaN. The validation metrics do not change at all and are considered to be improving all the time. In this case, the training continues until the number of `epochs` are acquired and early stopping doesn't trigger. Both resulting `best.pt` and `last.pt` are useless as they are full of NaN.
Case 2: Losses become NaN, the best metrics are acquired earlier. In this case, the training continues until the `patience` triggers. The resulting `best.pt` is useful, `last.pt` is not.
What should happen in both cases: When the losses are NaN, the validation metrics should update to NaN, zero or something like that so early stopping would trigger and at least the `best.pt` is an usable model.
I assume that the issue is related to how metrics are updated. Models with NaN losses do not predict anything so the resulting metrics have nothing to update.
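The kind of guard being asked for can be sketched in a few lines. This is purely illustrative, not the actual Ultralytics trainer API; the `loss`/`fitness` names are made up:

```python
import math

def guarded_fitness(loss, fitness):
    """Return a fitness value that lets early stopping react to NaN losses.

    If the training loss has gone NaN/inf, report -inf fitness instead of
    the stale previous metric, so a patience-based stopper triggers and
    best.pt stays a usable checkpoint.
    """
    if loss is None or math.isnan(loss) or math.isinf(loss):
        return float('-inf')
    return fitness

print(guarded_fitness(float('nan'), 0.85))  # -inf
print(guarded_fitness(0.42, 0.85))          # 0.85
```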
Environment:
ultralytics 8.3.74, Python-3.12.8

### Additional
_No response_
|
closed
|
2025-02-21T08:40:28Z
|
2025-02-25T14:19:57Z
|
https://github.com/ultralytics/ultralytics/issues/19357
|
[
"enhancement",
"question",
"fixed",
"detect"
] |
mayrajeo
| 13
|
plotly/dash
|
jupyter
| 2,252
|
[BUG] Deprecation warnings when running basic test in dash documentation
|
```
dash 2.6.2
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
When running the basic [dash test](https://dash.plotly.com/testing#example---basic-test) from the documentation, I get the following warning.
```
tests/test_website.py::test_001_child_with_0
C:\Users\ethopon\AppData\Local\pypoetry\Cache\virtualenvs\poetry-demo-VBrwyLAy-py3.10\lib\site-packages\flask\scaffold.py:50: DeprecationWarning:
'before_first_request' is deprecated and will be removed in Flask 2.3. Run setup code while creating the application instead.
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
```
**Expected behavior**
I expect to get no warnings.
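The warning originates from Flask deprecating `@app.before_first_request` (removed in Flask 2.3). For plain Flask code the replacement pattern is the one the message suggests, sketched below; in Dash's case the fix has to land in Dash itself, since Dash registers the hook internally:

```python
from flask import Flask

def create_app():
    app = Flask(__name__)

    # Old style (emits the DeprecationWarning; removed in Flask 2.3):
    # @app.before_first_request
    # def setup():
    #     init_resources()

    # New style: run the setup code while creating the application.
    with app.app_context():
        app.config['SETUP_DONE'] = True  # stand-in for real init work

    return app

app = create_app()
print(app.config['SETUP_DONE'])  # True
```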
|
closed
|
2022-09-27T14:35:07Z
|
2022-10-17T14:45:26Z
|
https://github.com/plotly/dash/issues/2252
|
[] |
prokie
| 0
|
Kludex/mangum
|
asyncio
| 212
|
How does Lambda maintains the state of the application?
|
My question is more about Lambda and less about Mangum itself. I have been struggling with this lately.
So Lambda invokes a chunk of code every time it is invoked, and Mangum makes it possible to route multiple paths to the same Lambda instead of one Lambda per route.
My question is: **Is the FastAPI server always running in the background between invocations of the Lambda, or is it started every time it is invoked?**
I am asking this because, what if I have global state which I use over the lifetime of my application? Will it be reset between invocations of the Lambda?
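For what it's worth, a warm Lambda container keeps module scope alive between invocations, which can be sketched without AWS at all (the handler below is a stand-in, not real Lambda plumbing):

```python
# Module scope: runs once per container ("cold start"), then persists
# across invocations while the container stays warm.
INVOCATIONS = 0
CACHE = {}

def handler(event, context=None):
    global INVOCATIONS
    INVOCATIONS += 1
    CACHE.setdefault('first_event', event)  # survives between warm calls
    return {'count': INVOCATIONS, 'first': CACHE['first_event']}

# Two "invocations" of the same warm container:
print(handler({'path': '/a'}))  # {'count': 1, 'first': {'path': '/a'}}
print(handler({'path': '/b'}))  # {'count': 2, 'first': {'path': '/a'}}
```

A cold start (new container) would reset `INVOCATIONS` and `CACHE`, so global state may or may not persist depending on whether the same container is reused.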
|
closed
|
2021-12-19T14:33:53Z
|
2021-12-20T10:55:18Z
|
https://github.com/Kludex/mangum/issues/212
|
[] |
santosh
| 0
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 292
|
Add torch to requirements.txt
|
ModuleNotFoundError: No module named 'torch'
|
closed
|
2020-03-04T01:19:44Z
|
2020-07-04T22:22:13Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/292
|
[] |
julianofischer
| 2
|
tensorflow/tensor2tensor
|
deep-learning
| 1,857
|
Tensorflow Requirements
|
Hello, I love what Tensor2Tensor is doing.
I have a general question: what is the recommended TensorFlow version? The tutorials are using TF1:
https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/notebooks/Transformer_translate.ipynb
Furthermore, if I use the terminal, t2t-datagen requires Datasets from TF2.
```
t2t-datagen \
--data_dir=$DATA_DIR \
--tmp_dir=$TMP_DIR \
--problem=$PROBLEM \
--t2t_usr_dir=$USR_DIR
```
but after training, if I try to predict using...
```
t2t-trainer \
--data_dir=$DATA_DIR \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$TRAIN_DIR
```
... I get a TF1-related error:
`module 'tensorflow' has no attribute 'logging'`
Am I missing something obvious?
Thank you in advance!
|
open
|
2020-10-17T07:52:08Z
|
2021-03-06T04:43:41Z
|
https://github.com/tensorflow/tensor2tensor/issues/1857
|
[] |
PatternAlpha
| 1
|
ploomber/ploomber
|
jupyter
| 241
|
Better error messages when SQL assertions fail
|
Currently, functions to test SQL relations just return True/False, which isn't great for debugging. We'd like to get the observations (at least a few of them) that failed, so we can debug. This implies running a second query and pulling observations out of the db, so let's make it optional.
See `numpy.testing` module
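A sketch of what that could look like, using sqlite3 purely for illustration (the `assert_no_nulls` name is made up, not Ploomber API):

```python
import sqlite3

def assert_no_nulls(conn, table, column, sample=5):
    """Check a SQL assertion; on failure, pull a few offending rows.

    Returns (passed, sample_rows). The second query runs only when the
    assertion fails, keeping the happy path cheap -- and sampling is
    optional via the `sample` limit.
    """
    n_bad = conn.execute(
        f'SELECT COUNT(*) FROM {table} WHERE {column} IS NULL'
    ).fetchone()[0]
    if n_bad == 0:
        return True, []
    rows = conn.execute(
        f'SELECT * FROM {table} WHERE {column} IS NULL LIMIT {sample}'
    ).fetchall()
    return False, rows

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER, value TEXT)')
conn.executemany('INSERT INTO t VALUES (?, ?)',
                 [(1, 'a'), (2, None), (3, None)])
passed, bad = assert_no_nulls(conn, 't', 'value')
print(passed, bad)  # False [(2, None), (3, None)]
```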
|
closed
|
2020-09-04T20:55:02Z
|
2022-05-08T17:54:16Z
|
https://github.com/ploomber/ploomber/issues/241
|
[] |
edublancas
| 1
|
Netflix/metaflow
|
data-science
| 2,253
|
`metadata()` doesn’t reflect runtime changes to `METAFLOW_*` environment variables
|
I'm experiencing an issue when trying to switch the metadata provider as shown in [this example](https://docs.metaflow.org/metaflow/client#metadata-provider). The problem occurs if the [required `METAFLOW_*` environment variables](https://docs.outerbounds.com/engineering/deployment/aws-managed/cloudformation/#additional-configuration) aren't set before Metaflow initializes.
Specifically, I need to set `METAFLOW_PROFILE` (and optionally `METAFLOW_HOME`) before any initialization happens. Once Metaflow is initialized, changes to these environment variables are ignored.
While I understand that changing `METAFLOW_HOME` at runtime might not be supported, it would be a helpful feature. The [metadata()](https://docs.metaflow.org/api/client#metadata) function suggests it's possible to switch metadata providers dynamically, but in practice, runtime changes to `METAFLOW_*` variables have no effect.
Did I miss something, or is this the expected behavior?
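As a workaround, the usual trick is to set the variables before the first `import metaflow` anywhere in the process, since the config is read at import/init time. A sketch (the profile name is made up):

```python
import os

# Must happen before metaflow is imported anywhere in the process --
# later changes to these variables are ignored.
os.environ['METAFLOW_PROFILE'] = 'my-profile'  # hypothetical profile name
os.environ.setdefault('METAFLOW_HOME', os.path.expanduser('~/.metaflowconfig'))

# Only now:
# from metaflow import metadata
# print(metadata())
print(os.environ['METAFLOW_PROFILE'])  # my-profile
```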
|
closed
|
2025-02-10T17:41:43Z
|
2025-02-11T19:42:46Z
|
https://github.com/Netflix/metaflow/issues/2253
|
[] |
dennismoe
| 5
|
horovod/horovod
|
machine-learning
| 3,597
|
mpirun command stuck on warning
|
**Setup:**
I have 2 VMs each with 1 GPU.
I have horovod installed on both VMs.
The training script exists on both VMs
**From my first VM I run the following command:**
/path/to/mpirun -np 2 \
-H {VM_1_IP}:1,{VM_2_IP}:1 \
-bind-to none -map-by slot \
-x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \
-mca pml ob1 -mca btl ^openib \
python training.py
**I get the following message:**
WARNING: Open MPI accepted a TCP connection from what appears to be a
another Open MPI process but cannot find a corresponding process
entry for that peer.
This attempted connection will be ignored; your MPI job may or may not
continue properly.
Local host: VM_2_IP
PID: {4_digit_number}
And then nothing happens...
What am I doing wrong? Any tips appreciated!
|
closed
|
2022-07-08T15:58:38Z
|
2022-07-08T20:21:17Z
|
https://github.com/horovod/horovod/issues/3597
|
[] |
bluepra
| 1
|
pallets/quart
|
asyncio
| 12
|
Flask-socket-io with quart server?
|
Is it possible to have something like Flask-SocketIO in Quart? It would make migration to Quart easier.
|
closed
|
2018-03-14T17:38:35Z
|
2022-07-07T00:22:50Z
|
https://github.com/pallets/quart/issues/12
|
[] |
juntiedt2
| 2
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,928
|
[NODRIVER] Cross-origin iframes not working
|
Hello, currently it seems that cross-origin iframes are not interactable by `select_all`, `find`. Would love to be able to interact with this, and I'm hoping someone is able to provide some assistance with this (I've checked all current issues, but I was not able to find a solution that works).
Below is a code-snippet in order to reproduce this.
```
import nodriver
import os
async def main():
config = nodriver.Config()
config.add_argument('--disable-web-security')
config.user_data_dir = os.path.join(os.getcwd(), 'test')
browser = await nodriver.start(config=config)
page = await browser.get('https://csreis.github.io/tests/cross-site-iframe.html')
cross_site_button = await page.find("Go cross-site (complex page)", best_match=True)
await cross_site_button.mouse_click()
search = await page.select_all('p', include_frames=True)
print('Total number of p tags:', len(search))
await page.sleep(60 * 60)
if __name__ == "__main__":
nodriver.loop().run_until_complete(main())
```
|
closed
|
2024-06-21T17:06:40Z
|
2024-08-21T23:11:28Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1928
|
[] |
RandomStrangerOnTheInternet
| 5
|
freqtrade/freqtrade
|
python
| 11,273
|
Binance Futures - Multi asset mode
|
Hi!
Because of regulations, I can only choose "multi-asset mode" in futures, which is currently not supported by the program.
I would like to ask whether it is expected to be implemented in the future or not.
Thanks in advance, have a nice day.

|
closed
|
2025-01-22T11:28:14Z
|
2025-01-22T12:31:06Z
|
https://github.com/freqtrade/freqtrade/issues/11273
|
[
"Question",
"Non-spot"
] |
MrHumanRebel
| 1
|
microsoft/nni
|
tensorflow
| 5,757
|
The model I built contains an if/else statement, and an error appears when the model is accelerated during pruning. How should I solve it?
|
**Describe the issue**:
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
|
open
|
2024-03-13T08:53:49Z
|
2024-03-13T08:53:49Z
|
https://github.com/microsoft/nni/issues/5757
|
[] |
xxiayy
| 0
|
LibrePhotos/librephotos
|
django
| 1,036
|
Backend container doesn't start with latest container, ModuleNotFoundError: No module named 'pycocotools'
|
Use SECRET_KEY from env
Traceback (most recent call last):
File "/code/manage.py", line 31, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.11/dist-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.11/dist-packages/django/core/management/__init__.py", line 416, in execute
django.setup()
File "/usr/local/lib/python3.11/dist-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python3.11/dist-packages/django/apps/registry.py", line 116, in populate
app_config.import_models()
File "/usr/local/lib/python3.11/dist-packages/django/apps/config.py", line 269, in import_models
self.models_module = import_module(models_module_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/code/api/models/__init__.py", line 1, in <module>
from api.models.album_auto import AlbumAuto
File "/code/api/models/album_auto.py", line 6, in <module>
from api.models.person import Person
File "/code/api/models/person.py", line 8, in <module>
from api.models.photo import Photo
File "/code/api/models/photo.py", line 21, in <module>
from api.im2txt.sample import im2txt
File "/code/api/im2txt/sample.py", line 23, in <module>
vocab = pickle.load(f)
^^^^^^^^^^^^^^
File "/code/api/im2txt/build_vocab.py", line 5, in <module>
from pycocotools.coco import COCO
ModuleNotFoundError: No module named 'pycocotools'
|
closed
|
2023-09-28T18:42:17Z
|
2023-09-29T09:43:59Z
|
https://github.com/LibrePhotos/librephotos/issues/1036
|
[
"bug"
] |
nowheretobefound
| 1
|
collerek/ormar
|
sqlalchemy
| 885
|
save_related doesn't work if id is uuid
|
**Describe the bug**
If you use Model.save_related and the model has a uuid pk instead of an int it doent save anything
(The save itself returns the numer of rows saved correctly)
**To Reproduce**
Copy code example of https://collerek.github.io/ormar/models/methods/#save_related
Code works as expected.
replace id with `id: UUID4 = ormar.UUID(primary_key=True, default=uuid4)`
Now the get() raises `ormar.exceptions.NoMatch`
**Expected behavior**
Expect the same behavior as with int pk.
**Versions (please complete the following information):**
- Database backend used postgress
- Python version python:3.10.8-slim-bullseye (docker image)
- `ormar` version 0.11.3
- `pydantic` version 1.10.2
- if applicable `fastapi` version 0.85.0
|
closed
|
2022-10-18T18:09:02Z
|
2022-10-31T16:45:39Z
|
https://github.com/collerek/ormar/issues/885
|
[
"bug"
] |
eloi-martinez-qida
| 2
|
hankcs/HanLP
|
nlp
| 1,349
|
I've tried downloading version 1.6.2 many times, but it just won't download. Has anyone managed to download it?
|
<!--
The notes and version number are required; otherwise there will be no reply. If you want a prompt reply, please fill in the template carefully. Thank you for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the following documents and found no answer in any of them:
- [Home page documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and found no answer there either.
* I understand that the open-source community is a free community gathered out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [ ] I put an x inside these brackets to confirm the items above.
## Version
<!-- For releases, state the jar file name without its extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is:
The version I am using is:
<!-- The items above are required; the rest is free-form -->
## My question
<!-- Please describe the problem in detail; the more detail, the more likely it is to be solved -->
## Reproducing the problem
<!-- What did you do to trigger the problem? For example, did you modify the code, the dictionary, or the model? -->
### Steps
1. First...
2. Then...
3. Next...
### Triggering code
```
public void testIssue1234() throws Exception
{
CustomDictionary.add("用户词语");
System.out.println(StandardTokenizer.segment("触发问题的句子"));
}
```
### Expected output
<!-- What correct result do you expect? -->
```
Expected output
```
### Actual output
<!-- What did HanLP actually output? What happened? Where is it wrong? -->
```
Actual output
```
## Other information
<!-- Any potentially useful information, including screenshots, logs, configuration files, related issues, etc. -->
|
closed
|
2019-12-10T08:37:44Z
|
2020-01-01T10:48:00Z
|
https://github.com/hankcs/HanLP/issues/1349
|
[
"ignored"
] |
Kaili-gao
| 1
|
dgtlmoon/changedetection.io
|
web-scraping
| 1,930
|
[feature] add pluggable support for hrequests
|
This looks like a cool library, especially the browser TLS fingerprint replication
https://github.com/daijro/hrequests
> Website bot detection systems typically do not correlate the TLS fingerprint browser version with the browser header.
> By adding more randomization to our headers, we can make our requests appear to be coming from a larger number of clients. This makes it harder for websites to identify and block our requests based on a consistent browser version.
|
open
|
2023-11-03T10:19:24Z
|
2023-11-07T14:19:03Z
|
https://github.com/dgtlmoon/changedetection.io/issues/1930
|
[
"enhancement"
] |
dgtlmoon
| 1
|
ray-project/ray
|
deep-learning
| 51,481
|
CI test windows://python/ray/tests:test_args is consistently_failing
|
CI test **windows://python/ray/tests:test_args** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aad4-a541-45a9-b1ef-d27f9a1da383
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4168-a0da-6cbdc8cbd2df
DataCaseName-windows://python/ray/tests:test_args-END
Managed by OSS Test Policy
|
closed
|
2025-03-18T23:06:31Z
|
2025-03-19T21:52:15Z
|
https://github.com/ray-project/ray/issues/51481
|
[
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] |
can-anyscale
| 2
|
autokey/autokey
|
automation
| 536
|
Update deprecated regex sequences
|
## Classification:
Maintenance
## Version
AutoKey version: develop 5963c7ceae668736e1b91809eff426a47ed5a507
## Summary
The CI tests are warning about several deprecated regex sequences. We should find alternatives and replace them.
The regex is one of the better-tested parts of the code in terms of unit tests, so we should be safe with what we change.
```
lib/autokey/model/key.py:122
/home/runner/work/autokey/autokey/lib/autokey/model/key.py:122: DeprecationWarning: invalid escape sequence \+
KEY_SPLIT_RE = re.compile("(<[^<>]+>\+?)")
lib/autokey/model/helpers.py:21
/home/runner/work/autokey/autokey/lib/autokey/model/helpers.py:21: DeprecationWarning: invalid escape sequence \w
DEFAULT_WORDCHAR_REGEX = '[\w]'
lib/autokey/model/abstract_abbreviation.py:143
/home/runner/work/autokey/autokey/lib/autokey/model/abstract_abbreviation.py:143: DeprecationWarning: invalid escape sequence \s
if len(stringBefore) > 0 and not re.match('(^\s)', stringBefore[-1]) and not self.triggerInside:
```
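All three warnings disappear by switching to raw strings, which leave the regex semantics unchanged:

```python
import re

# Before: "(<[^<>]+>\+?)" -- '\+' is an invalid escape in a normal string.
KEY_SPLIT_RE = re.compile(r"(<[^<>]+>\+?)")

# Before: '[\w]' -- same problem with '\w'.
DEFAULT_WORDCHAR_REGEX = r'[\w]'

# Before: '(^\s)' -- and with '\s'.
string_before = "foo"
trigger_inside = False
if string_before and not re.match(r'(^\s)', string_before[-1]) and not trigger_inside:
    print("would expand")  # last char is not whitespace, so this runs

print(KEY_SPLIT_RE.findall("<ctrl>+a"))  # ['<ctrl>+']
```

Raw strings pass the backslashes through to the regex engine untouched, so the compiled patterns are byte-for-byte identical to the current ones — only the deprecated string-literal escapes go away.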
|
open
|
2021-04-26T21:36:06Z
|
2024-06-03T09:16:20Z
|
https://github.com/autokey/autokey/issues/536
|
[
"help-wanted",
"easy fix",
"development"
] |
BlueDrink9
| 0
|
littlecodersh/ItChat
|
api
| 49
|
itchat.create_chatroom fails to create a chatroom
|
print itchat.create_chatroom(itchat.get_contract(),"lll2l")
The result is empty.
|
closed
|
2016-07-27T02:07:32Z
|
2016-07-28T02:42:03Z
|
https://github.com/littlecodersh/ItChat/issues/49
|
[
"question"
] |
featheraaa
| 12
|
xinntao/Real-ESRGAN
|
pytorch
| 725
|
realesr-animevideov3 problem:“Could find no file with path 'out_frames/frame%08d.jpg' and index in the range 0-4 out_frames/frame%08d.jpg: No such file or directory”
|
“realesrgan-ncnn-vulkan.exe -i tmp_frames -o out_frames -n realesr-animevideov3 -s 2 -f jpg”
The source files are missing a few frames, for example, it jumps from frame02 to frame05. While enlarging the frames is not a problem, running the command for video synthesis will fail.
<img width="841" alt="Screenshot 2023-11-28 155531" src="https://github.com/xinntao/Real-ESRGAN/assets/129326754/5c07951f-d46f-4f4f-9084-0b81743706a4">
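One workaround for the gap is to renumber the upscaled frames into a contiguous `%08d` sequence before handing them to ffmpeg. A sketch (adjust the folder name and extension to your setup):

```python
import os

def renumber_frames(folder, ext='.jpg'):
    """Rename frame00000002.jpg, frame00000005.jpg, ... into a gap-free
    frame%08d sequence so ffmpeg's image2 demuxer finds every index.

    Processing in ascending order is safe: each target index is <= the
    original index, so a slot is always free (or the file itself) by the
    time we rename into it.
    """
    frames = sorted(f for f in os.listdir(folder)
                    if f.startswith('frame') and f.endswith(ext))
    for i, name in enumerate(frames, start=1):
        target = 'frame%08d%s' % (i, ext)
        if name != target:
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, target))
    return len(frames)

# Demo with a temporary directory:
import tempfile
d = tempfile.mkdtemp()
for n in (2, 5, 9):  # gappy sequence, like dropped frames
    open(os.path.join(d, 'frame%08d.jpg' % n), 'w').close()
renumber_frames(d)
print(sorted(os.listdir(d)))
# ['frame00000001.jpg', 'frame00000002.jpg', 'frame00000003.jpg']
```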
|
closed
|
2023-11-28T07:56:51Z
|
2023-11-28T17:54:45Z
|
https://github.com/xinntao/Real-ESRGAN/issues/725
|
[] |
joeyttt
| 1
|
matterport/Mask_RCNN
|
tensorflow
| 2,496
|
Accelerate Mask RCNN Inference
|
I want to make changes to the DNN architecture in order to increase the FPS.
I am using ResNet-50 as a backbone. I read this paper and tried to understand the changes they made in their architecture.
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8859360
Does anyone know a way to increase the FPS of the net beyond 5?
My GPU is a Titan RTX.
Thank you
|
open
|
2021-03-02T11:54:31Z
|
2021-03-02T11:54:31Z
|
https://github.com/matterport/Mask_RCNN/issues/2496
|
[] |
SaharGezer
| 0
|
TencentARC/GFPGAN
|
pytorch
| 517
|
Ujjal Deb
|

|
open
|
2024-02-16T09:23:46Z
|
2024-02-16T09:35:02Z
|
https://github.com/TencentARC/GFPGAN/issues/517
|
[] |
Ujjalvhai
| 2
|
pydantic/logfire
|
fastapi
| 876
|
LLM qualitative evaluations and labeling
|
### Description
It would be nice to have a place in the platform for this.
Another option would be to allow for integration with a partner that does provide it.
|
open
|
2025-02-19T20:20:40Z
|
2025-02-19T21:27:28Z
|
https://github.com/pydantic/logfire/issues/876
|
[
"Feature Request"
] |
Luca-Blight
| 2
|
ets-labs/python-dependency-injector
|
flask
| 33
|
Make Objects compatible with PyPy
|
Acceptance criterias:
- Tests on PyPy passed.
- Badge with supported version added to README.md
|
closed
|
2015-03-17T13:04:18Z
|
2015-03-26T15:29:22Z
|
https://github.com/ets-labs/python-dependency-injector/issues/33
|
[
"enhancement"
] |
rmk135
| 0
|
gunthercox/ChatterBot
|
machine-learning
| 2,145
|
I have following error in installing and running the chatterbot
|
C:\Users\Pushkar Tiwari>pip install chatterbot
Collecting chatterbot
Downloading ChatterBot-1.0.5-py2.py3-none-any.whl (67 kB)
|████████████████████████████████| 67 kB 1.3 MB/s
Collecting python-dateutil<2.8,>=2.7
Downloading python_dateutil-2.7.5-py2.py3-none-any.whl (225 kB)
|████████████████████████████████| 225 kB 3.3 MB/s
Collecting spacy<2.2,>=2.1
Downloading spacy-2.1.9.tar.gz (30.7 MB)
|████████████████████████████████| 30.7 MB 6.4 MB/s
Installing build dependencies ... error
ERROR: Command errored out with exit status 1:
command: 'C:\anaconda\python.exe' 'C:\anaconda\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-build-env-o1fy724g\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'wheel>0.32.0,<0.33.0' Cython 'cymem>=2.0.2,<2.1.0' 'preshed>=2.0.1,<2.1.0' 'murmurhash>=0.28.0,<1.1.0' 'thinc>=7.0.8,<7.1.0'
cwd: None
Complete output (278 lines):
Collecting setuptools
Using cached setuptools-56.0.0-py3-none-any.whl (784 kB)
Collecting wheel<0.33.0,>0.32.0
Downloading wheel-0.32.3-py2.py3-none-any.whl (21 kB)
Collecting Cython
Downloading Cython-0.29.23-cp38-cp38-win_amd64.whl (1.7 MB)
Collecting cymem<2.1.0,>=2.0.2
Downloading cymem-2.0.5-cp38-cp38-win_amd64.whl (36 kB)
Collecting preshed<2.1.0,>=2.0.1
Downloading preshed-2.0.1.tar.gz (113 kB)
Collecting murmurhash<1.1.0,>=0.28.0
Downloading murmurhash-1.0.5-cp38-cp38-win_amd64.whl (21 kB)
Collecting thinc<7.1.0,>=7.0.8
Downloading thinc-7.0.8.tar.gz (1.9 MB)
Collecting blis<0.3.0,>=0.2.1
Downloading blis-0.2.4.tar.gz (1.5 MB)
Collecting wasabi<1.1.0,>=0.0.9
Using cached wasabi-0.8.2-py3-none-any.whl (23 kB)
Collecting srsly<1.1.0,>=0.0.6
Downloading srsly-1.0.5-cp38-cp38-win_amd64.whl (178 kB)
Collecting numpy>=1.7.0
Downloading numpy-1.20.2-cp38-cp38-win_amd64.whl (13.7 MB)
Collecting plac<1.0.0,>=0.9.6
Using cached plac-0.9.6-py2.py3-none-any.whl (20 kB)
Collecting tqdm<5.0.0,>=4.10.0
Downloading tqdm-4.60.0-py2.py3-none-any.whl (75 kB)
Building wheels for collected packages: preshed, thinc, blis
Building wheel for preshed (setup.py): started
Building wheel for preshed (setup.py): finished with status 'error'
ERROR: Command errored out with exit status 1:
command: 'C:\anaconda\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Pushkar Tiwari\\AppData\\Local\\Temp\\pip-install-23fnnvqn\\preshed\\setup.py'"'"'; __file__='"'"'C:\\Users\\Pushkar Tiwari\\AppData\\Local\\Temp\\pip-install-23fnnvqn\\preshed\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-wheel-lm05x_1y'
cwd: C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-install-23fnnvqn\preshed\
Complete output (21 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.8
creating build\lib.win-amd64-3.8\preshed
copying preshed\about.py -> build\lib.win-amd64-3.8\preshed
copying preshed\__init__.py -> build\lib.win-amd64-3.8\preshed
creating build\lib.win-amd64-3.8\preshed\tests
copying preshed\tests\test_counter.py -> build\lib.win-amd64-3.8\preshed\tests
copying preshed\tests\test_hashing.py -> build\lib.win-amd64-3.8\preshed\tests
copying preshed\tests\test_pop.py -> build\lib.win-amd64-3.8\preshed\tests
copying preshed\tests\__init__.py -> build\lib.win-amd64-3.8\preshed\tests
copying preshed\counter.pyx -> build\lib.win-amd64-3.8\preshed
copying preshed\maps.pyx -> build\lib.win-amd64-3.8\preshed
copying preshed\counter.pxd -> build\lib.win-amd64-3.8\preshed
copying preshed\maps.pxd -> build\lib.win-amd64-3.8\preshed
copying preshed\__init__.pxd -> build\lib.win-amd64-3.8\preshed
running build_ext
building 'preshed.maps' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Failed building wheel for preshed
Running setup.py clean for preshed
Building wheel for thinc (setup.py): started
Building wheel for thinc (setup.py): finished with status 'error'
ERROR: Command errored out with exit status 1:
command: 'C:\anaconda\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Pushkar Tiwari\\AppData\\Local\\Temp\\pip-install-23fnnvqn\\thinc\\setup.py'"'"'; __file__='"'"'C:\\Users\\Pushkar Tiwari\\AppData\\Local\\Temp\\pip-install-23fnnvqn\\thinc\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-wheel-mb62subz'
cwd: C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-install-23fnnvqn\thinc\
Complete output (166 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.8
creating build\lib.win-amd64-3.8\thinc
copying thinc\about.py -> build\lib.win-amd64-3.8\thinc
copying thinc\api.py -> build\lib.win-amd64-3.8\thinc
copying thinc\check.py -> build\lib.win-amd64-3.8\thinc
copying thinc\compat.py -> build\lib.win-amd64-3.8\thinc
copying thinc\describe.py -> build\lib.win-amd64-3.8\thinc
copying thinc\exceptions.py -> build\lib.win-amd64-3.8\thinc
copying thinc\i2v.py -> build\lib.win-amd64-3.8\thinc
copying thinc\loss.py -> build\lib.win-amd64-3.8\thinc
copying thinc\misc.py -> build\lib.win-amd64-3.8\thinc
copying thinc\rates.py -> build\lib.win-amd64-3.8\thinc
copying thinc\t2t.py -> build\lib.win-amd64-3.8\thinc
copying thinc\t2v.py -> build\lib.win-amd64-3.8\thinc
copying thinc\v2v.py -> build\lib.win-amd64-3.8\thinc
copying thinc\__init__.py -> build\lib.win-amd64-3.8\thinc
creating build\lib.win-amd64-3.8\thinc\tests
copying thinc\tests\conftest.py -> build\lib.win-amd64-3.8\thinc\tests
copying thinc\tests\strategies.py -> build\lib.win-amd64-3.8\thinc\tests
copying thinc\tests\test_api_funcs.py -> build\lib.win-amd64-3.8\thinc\tests
copying thinc\tests\test_util.py -> build\lib.win-amd64-3.8\thinc\tests
copying thinc\tests\util.py -> build\lib.win-amd64-3.8\thinc\tests
copying thinc\tests\__init__.py -> build\lib.win-amd64-3.8\thinc\tests
creating build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_about.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_affine.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_beam_search.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_check_exceptions.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_difference.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_feature_extracter.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_hash_embed.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_imports.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_linear.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_loss.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_mem.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_model.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_ops.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_pickle.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_pooling.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_pytorch_wrapper.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_rates.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\test_rnn.py -> build\lib.win-amd64-3.8\thinc\tests\unit
copying thinc\tests\unit\__init__.py -> build\lib.win-amd64-3.8\thinc\tests\unit
creating build\lib.win-amd64-3.8\thinc\tests\integration
copying thinc\tests\integration\test_affine_learns.py -> build\lib.win-amd64-3.8\thinc\tests\integration
copying thinc\tests\integration\test_basic_tagger.py -> build\lib.win-amd64-3.8\thinc\tests\integration
copying thinc\tests\integration\test_batch_norm.py -> build\lib.win-amd64-3.8\thinc\tests\integration
copying thinc\tests\integration\test_feed_forward.py -> build\lib.win-amd64-3.8\thinc\tests\integration
copying thinc\tests\integration\test_mnist.py -> build\lib.win-amd64-3.8\thinc\tests\integration
copying thinc\tests\integration\test_pickle.py -> build\lib.win-amd64-3.8\thinc\tests\integration
copying thinc\tests\integration\test_roundtrip_bytes.py -> build\lib.win-amd64-3.8\thinc\tests\integration
copying thinc\tests\integration\test_shape_check.py -> build\lib.win-amd64-3.8\thinc\tests\integration
copying thinc\tests\integration\__init__.py -> build\lib.win-amd64-3.8\thinc\tests\integration
creating build\lib.win-amd64-3.8\thinc\tests\linear
copying thinc\tests\linear\test_avgtron.py -> build\lib.win-amd64-3.8\thinc\tests\linear
copying thinc\tests\linear\test_linear.py -> build\lib.win-amd64-3.8\thinc\tests\linear
copying thinc\tests\linear\test_sparse_array.py -> build\lib.win-amd64-3.8\thinc\tests\linear
copying thinc\tests\linear\__init__.py -> build\lib.win-amd64-3.8\thinc\tests\linear
creating build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\__init__.py -> build\lib.win-amd64-3.8\thinc\linear
creating build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\mem.py -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\pooling.py -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\train.py -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\util.py -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\vec2vec.py -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\vecs2vec.py -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\vecs2vecs.py -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\_lsuv.py -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\__init__.py -> build\lib.win-amd64-3.8\thinc\neural
creating build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\datasets.py -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\hpbff.py -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\load_nlp.py -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\visualizer.py -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\wrappers.py -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\__init__.py -> build\lib.win-amd64-3.8\thinc\extra
creating build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\affine.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\attention.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\batchnorm.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\convolution.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\difference.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\elu.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\embed.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\encoder_decoder.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\feature_extracter.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\feed_forward.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\function_layer.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\hash_embed.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\layernorm.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\maxout.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\model.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\multiheaded_attention.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\relu.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\resnet.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\rnn.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\selu.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\softmax.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\static_vectors.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
copying thinc\neural\_classes\__init__.py -> build\lib.win-amd64-3.8\thinc\neural\_classes
creating build\lib.win-amd64-3.8\thinc\extra\_vendorized
copying thinc\extra\_vendorized\keras_datasets.py -> build\lib.win-amd64-3.8\thinc\extra\_vendorized
copying thinc\extra\_vendorized\keras_data_utils.py -> build\lib.win-amd64-3.8\thinc\extra\_vendorized
copying thinc\extra\_vendorized\keras_generic_utils.py -> build\lib.win-amd64-3.8\thinc\extra\_vendorized
copying thinc\extra\_vendorized\__init__.py -> build\lib.win-amd64-3.8\thinc\extra\_vendorized
creating build\lib.win-amd64-3.8\thinc\extra\wrapt
copying thinc\extra\wrapt\decorators.py -> build\lib.win-amd64-3.8\thinc\extra\wrapt
copying thinc\extra\wrapt\importer.py -> build\lib.win-amd64-3.8\thinc\extra\wrapt
copying thinc\extra\wrapt\wrappers.py -> build\lib.win-amd64-3.8\thinc\extra\wrapt
copying thinc\extra\wrapt\__init__.py -> build\lib.win-amd64-3.8\thinc\extra\wrapt
copying thinc\linalg.pyx -> build\lib.win-amd64-3.8\thinc
copying thinc\structs.pyx -> build\lib.win-amd64-3.8\thinc
copying thinc\typedefs.pyx -> build\lib.win-amd64-3.8\thinc
copying thinc\cpu.pxd -> build\lib.win-amd64-3.8\thinc
copying thinc\linalg.pxd -> build\lib.win-amd64-3.8\thinc
copying thinc\structs.pxd -> build\lib.win-amd64-3.8\thinc
copying thinc\typedefs.pxd -> build\lib.win-amd64-3.8\thinc
copying thinc\__init__.pxd -> build\lib.win-amd64-3.8\thinc
copying thinc\compile_time_constants.pxi -> build\lib.win-amd64-3.8\thinc
copying thinc\linalg.cpp -> build\lib.win-amd64-3.8\thinc
copying thinc\structs.cpp -> build\lib.win-amd64-3.8\thinc
copying thinc\typedefs.cpp -> build\lib.win-amd64-3.8\thinc
copying thinc\linear\avgtron.pyx -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\features.pyx -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\linear.pyx -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\serialize.pyx -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\sparse.pyx -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\avgtron.pxd -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\features.pxd -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\serialize.pxd -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\sparse.pxd -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\__init__.pxd -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\avgtron.cpp -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\features.cpp -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\linear.cpp -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\serialize.cpp -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\linear\sparse.cpp -> build\lib.win-amd64-3.8\thinc\linear
copying thinc\neural\ops.pyx -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\optimizers.pyx -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\_aligned_alloc.pyx -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\cpu.pxd -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\ops.pxd -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\__init__.pxd -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\ops.cpp -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\optimizers.cpp -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\neural\_aligned_alloc.cpp -> build\lib.win-amd64-3.8\thinc\neural
copying thinc\extra\cache.pyx -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\eg.pyx -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\mb.pyx -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\search.pyx -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\cache.pxd -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\eg.pxd -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\mb.pxd -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\search.pxd -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\__init__.pxd -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\cache.cpp -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\eg.cpp -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\mb.cpp -> build\lib.win-amd64-3.8\thinc\extra
copying thinc\extra\search.cpp -> build\lib.win-amd64-3.8\thinc\extra
running build_ext
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Failed building wheel for thinc
Running setup.py clean for thinc
Building wheel for blis (setup.py): started
Building wheel for blis (setup.py): finished with status 'error'
ERROR: Command errored out with exit status 1:
command: 'C:\anaconda\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Pushkar Tiwari\\AppData\\Local\\Temp\\pip-install-23fnnvqn\\blis\\setup.py'"'"'; __file__='"'"'C:\\Users\\Pushkar Tiwari\\AppData\\Local\\Temp\\pip-install-23fnnvqn\\blis\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-wheel-9_ck_hy_'
cwd: C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-install-23fnnvqn\blis\
Complete output (21 lines):
BLIS_COMPILER? None
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.8
creating build\lib.win-amd64-3.8\blis
copying blis\about.py -> build\lib.win-amd64-3.8\blis
copying blis\benchmark.py -> build\lib.win-amd64-3.8\blis
copying blis\__init__.py -> build\lib.win-amd64-3.8\blis
creating build\lib.win-amd64-3.8\blis\tests
copying blis\tests\common.py -> build\lib.win-amd64-3.8\blis\tests
copying blis\tests\test_dotv.py -> build\lib.win-amd64-3.8\blis\tests
copying blis\tests\test_gemm.py -> build\lib.win-amd64-3.8\blis\tests
copying blis\tests\__init__.py -> build\lib.win-amd64-3.8\blis\tests
copying blis\cy.pyx -> build\lib.win-amd64-3.8\blis
copying blis\py.pyx -> build\lib.win-amd64-3.8\blis
copying blis\cy.pxd -> build\lib.win-amd64-3.8\blis
copying blis\__init__.pxd -> build\lib.win-amd64-3.8\blis
running build_ext
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Failed building wheel for blis
Running setup.py clean for blis
Failed to build preshed thinc blis
Installing collected packages: setuptools, wheel, Cython, cymem, preshed, murmurhash, numpy, blis, wasabi, srsly, plac, tqdm, thinc
Running setup.py install for preshed: started
Running setup.py install for preshed: finished with status 'error'
ERROR: Command errored out with exit status 1:
command: 'C:\anaconda\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Pushkar Tiwari\\AppData\\Local\\Temp\\pip-install-23fnnvqn\\preshed\\setup.py'"'"'; __file__='"'"'C:\\Users\\Pushkar Tiwari\\AppData\\Local\\Temp\\pip-install-23fnnvqn\\preshed\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-record-pvumkoqq\install-record.txt' --single-version-externally-managed --prefix 'C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-build-env-o1fy724g\overlay' --compile --install-headers 'C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-build-env-o1fy724g\overlay\Include\preshed'
cwd: C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-install-23fnnvqn\preshed\
Complete output (6 lines):
running install
running build
running build_py
running build_ext
building 'preshed.maps' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\anaconda\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Pushkar Tiwari\\AppData\\Local\\Temp\\pip-install-23fnnvqn\\preshed\\setup.py'"'"'; __file__='"'"'C:\\Users\\Pushkar Tiwari\\AppData\\Local\\Temp\\pip-install-23fnnvqn\\preshed\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-record-pvumkoqq\install-record.txt' --single-version-externally-managed --prefix 'C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-build-env-o1fy724g\overlay' --compile --install-headers 'C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-build-env-o1fy724g\overlay\Include\preshed' Check the logs for full command output.
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\anaconda\python.exe' 'C:\anaconda\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\Pushkar Tiwari\AppData\Local\Temp\pip-build-env-o1fy724g\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'wheel>0.32.0,<0.33.0' Cython 'cymem>=2.0.2,<2.1.0' 'preshed>=2.0.1,<2.1.0' 'murmurhash>=0.28.0,<1.1.0' 'thinc>=7.0.8,<7.1.0' Check the logs for full command output.
|
closed
|
2021-04-16T16:03:41Z
|
2025-02-17T21:42:55Z
|
https://github.com/gunthercox/ChatterBot/issues/2145
|
[] |
pushkar201851095
| 0
|
flairNLP/flair
|
nlp
| 3,506
|
[Bug]: error when training using mps
|
### Describe the bug
When setting `flair.device` to `mps`, the following error is thrown during training:
```
RuntimeError: User specified an unsupported autocast device_type 'mps'
```
### To Reproduce
```python
flair.device = torch.device("mps")
... build and train model
```
### Expected behavior
Torch's mps support should be usable via flair.
### Logs and Stack traces
_No response_
### Screenshots
_No response_
### Additional Context
_No response_
### Environment
#### Versions:
##### Flair
0.13.1
##### Pytorch
2.3.1
##### Transformers
4.42.4
#### GPU
False
|
open
|
2024-07-23T20:42:27Z
|
2024-08-03T19:12:58Z
|
https://github.com/flairNLP/flair/issues/3506
|
[
"bug"
] |
joprice
| 1
|
aimhubio/aim
|
data-visualization
| 3,193
|
Add Relative Time To Table View in Metrics Panel
|
## 🚀 Feature
The table view on the metrics page should support seeing relative time.
### Motivation
The chart can be aligned by step, and it's useful to see the relative time in the table below.
### Pitch
When mousing over the chart, the relative time should stay in sync with where you are pointing, just like value and step do.
### Alternatives
You can change the x-axis alignment from step to relative time. It's a bit cumbersome to keep going back and forth between different x axis.
### Additional context
I've implemented this locally and it looks like this.

|
open
|
2024-07-23T00:03:47Z
|
2024-07-23T00:03:47Z
|
https://github.com/aimhubio/aim/issues/3193
|
[
"type / enhancement"
] |
vinayan3
| 0
|
plotly/dash
|
dash
| 2,986
|
[REGRESSION] Number of no_update must match number of outputs in dash==2.18.0
|
In `dash==2.18.0`, the number of `no_update` values returned must match the number of outputs. This is not true in earlier versions of Dash.
This can be resolved by making the number of returned `no_update` values match, but it is a regression.
Consider:
```python
import dash
from dash import html, dcc, Input, Output, no_update
# Initialize the Dash app
app = dash.Dash(__name__)
# Define the layout
app.layout = html.Div(
[
dcc.Input(id="input-box", type="text", value=""),
html.Button("Submit", id="button"),
html.Div(id="output-1", children="Output 1 will be displayed here"),
html.Div(id="output-2", children="Output 2 will be displayed here"),
]
)
# Callback with two outputs
@app.callback(
Output("output-1", "children"),
Output("output-2", "children"),
Input("button", "n_clicks"),
Input("input-box", "value"),
)
def update_outputs(n_clicks, value):
if n_clicks is None:
return no_update
return "Hello", "world!"
# Run the app
if __name__ == "__main__":
app.run_server(debug=True)
```
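One way to make the callback above compatible with 2.18 is to return one `no_update` per declared output. A minimal sketch of that shape (a plain sentinel stands in for `dash.no_update` here, since Dash itself isn't imported; in the real app you would return `no_update` twice):

```python
# Hypothetical sentinel standing in for dash.no_update
NO_UPDATE = object()

def update_outputs(n_clicks, value):
    if n_clicks is None:
        # One value per declared Output, as Dash 2.18 requires
        return NO_UPDATE, NO_UPDATE
    return "Hello", "world!"

print(update_outputs(None, ""))   # two sentinels, one per output
print(update_outputs(1, "hi"))    # ('Hello', 'world!')
```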
Dash 2.18

Dash 2.17

|
closed
|
2024-09-05T17:39:04Z
|
2024-09-17T13:53:23Z
|
https://github.com/plotly/dash/issues/2986
|
[
"regression",
"bug",
"P1"
] |
ndrezn
| 4
|
plotly/dash
|
dash
| 2,263
|
[BUG] Dash Pages does not support relative package imports within page modules
|
Env:
```
dash 2.6.2
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
I typically make my Dash apps as Python packages, using relative imports within my app. I've just discovered that Dash Pages is not happy with relative imports being used within your page modules (e.g. `pages/page1.py`). As far as I can tell, this is because Dash manually loads the contents of `pages` as modules using `exec_module`, but, if your app is inside a package, it does not attach the page modules to the package. So when you try to access a `utils.py` in the base of your package, e.g. with `from ..utils import blah` within `pages/page1.py`, you'll get:
ImportError: attempted relative import beyond top-level package
So effectively, every module within your `pages` directory is orphaned from the package, and you won't be able to use relative imports to share utility functions across your page modules and the rest of your app.
**Minimal example**
`__init__.py` (empty)
`app.py`
```python
import dash
app = dash.Dash(__name__, use_pages=True)
app.layout = dash.html.Div(["main page", dash.page_container])
app.run_server(debug=True)
```
`pages/__init__.py` (empty)
`pages/page.py`
```python
import dash
from ..utils import blah
dash.register_page(__name__, path="/page")
layout = dash.html.Div("page")
```
`utils.py`
```python
def blah():
pass
```
Running `python -m dash_pages_bug.app` from the directory above results in:
ImportError: attempted relative import beyond top-level package
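To illustrate the orphaning described above, here is a hedged, self-contained sketch — not Dash's actual loader, just the same `spec_from_file_location`/`exec_module` pattern. A module loaded under a bare (non-dotted) name gets an empty `__package__`, so `..` has nowhere to resolve:

```python
import importlib.util
import pathlib
import tempfile

# Load a file as a module under a bare name, the way a manual page loader does;
# the resulting module has no parent package attached.
with tempfile.TemporaryDirectory() as d:
    page = pathlib.Path(d) / "page.py"
    page.write_text("PKG = __package__\n")
    spec = importlib.util.spec_from_file_location("page", page)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)

print(repr(mod.PKG))  # '' — orphaned from any package, so relative imports fail
```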
|
closed
|
2022-10-08T06:45:54Z
|
2023-03-15T22:27:44Z
|
https://github.com/plotly/dash/issues/2263
|
[] |
ned2
| 3
|
moshi4/pyCirclize
|
matplotlib
| 19
|
Legend won't load
|
Hi @moshi4 ,
Thanks for making such an amazing tool, pyCirclize is great and very simple to use.
I am trying to plot a circular phage genome so I can add it as an option in my program [pharokka](https://github.com/gbouras13/pharokka/issues/230), mostly following your tutorials in the docs, but the legend won't load no matter what I try. Any ideas what could be causing the issue?
I'm using v 0.3.1.
The code is:
```
from pycirclize import Circos
from pycirclize.parser import Gff
from pycirclize.parser import Genbank
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
import os
import numpy as np
# Load GFF file
def create_plot(out_dir, prefix, plot_name):
# read in gff file
gff_file = os.path.join(out_dir, prefix + ".gff")
gff = Gff(gff_file)
# Load Genbank file
gbk_file = os.path.join(out_dir, prefix + ".gbk")
# get only to range of gff - as by default gbk takes all contigs, gff only the first
gbk = Genbank(gbk_file, max_range = gff.range_size)
# instantiate circos
circos = Circos(sectors={gbk.name: gbk.range_size})
#circos.text(plot_name, size=16)
sector = circos.get_sector(gbk.name)
cds_track = sector.add_track((90, 100))
cds_track.axis(fc="#EEEEEE", ec="none")
# Plot forward CDS
cds_track.genomic_features(
gff.extract_features("CDS", target_strand=1),
plotstyle="arrow",
r_lim=(95, 100),
fc="salmon",
)
# Plot reverse CDS
cds_track.genomic_features(
gff.extract_features("CDS", target_strand=-1),
plotstyle="arrow",
r_lim=(90, 95),
fc="skyblue",
)
# Extract CDS product labels
pos_list, labels = [], []
for f in gff.extract_features("CDS"):
start, end = int(str(f.location.start)), int(str(f.location.end))
pos = (start + end) / 2
label = f.qualifiers.get("product", [""])[0]
if label == "" or label.startswith("hypothetical") or label.startswith("unknown"):
continue
if len(label) > 25:
label = label[:25] + "..."
pos_list.append(pos)
labels.append(label)
# Plot CDS product labels on outer position
cds_track.xticks(
pos_list,
labels,
label_orientation="vertical",
show_bottom_line=True,
label_size=7,
line_kws=dict(ec="grey"),
)
# Plot GC content
gc_content_track = sector.add_track((50, 65))
pos_list, gc_contents = gbk.calc_gc_content()
gc_contents = gc_contents - gbk.calc_genome_gc_content()
positive_gc_contents = np.where(gc_contents > 0, gc_contents, 0)
negative_gc_contents = np.where(gc_contents < 0, gc_contents, 0)
abs_max_gc_content = np.max(np.abs(gc_contents))
vmin, vmax = -abs_max_gc_content, abs_max_gc_content
gc_content_track.fill_between(
pos_list, positive_gc_contents, 0, vmin=vmin, vmax=vmax, color="black"
)
gc_content_track.fill_between(
pos_list, negative_gc_contents, 0, vmin=vmin, vmax=vmax, color="grey"
)
# Plot GC skew
gc_skew_track = sector.add_track((35, 50))
pos_list, gc_skews = gbk.calc_gc_skew()
positive_gc_skews = np.where(gc_skews > 0, gc_skews, 0)
negative_gc_skews = np.where(gc_skews < 0, gc_skews, 0)
abs_max_gc_skew = np.max(np.abs(gc_skews))
vmin, vmax = -abs_max_gc_skew, abs_max_gc_skew
gc_skew_track.fill_between(
pos_list, positive_gc_skews, 0, vmin=vmin, vmax=vmax, color="olive"
)
gc_skew_track.fill_between(
pos_list, negative_gc_skews, 0, vmin=vmin, vmax=vmax, color="purple"
)
# Plot xticks & intervals on inner position
cds_track.xticks_by_interval(
interval=5000,
outer=False,
show_bottom_line=True,
label_formatter=lambda v: f"{v/ 1000:.1f} Kb",
label_orientation="vertical",
line_kws=dict(ec="grey"),
)
# # Add legend
handle = [
Patch(color="salmon", label="Forward CDS"),
Patch(color="skyblue", label="Reverse CDS"),
Line2D([], [], color="black", label="Positive GC Content", marker="^", ms=6, ls="None"),
Line2D([], [], color="grey", label="Negative GC Content", marker="v", ms=6, ls="None"),
Line2D([], [], color="olive", label="Positive GC Skew", marker="^", ms=6, ls="None"),
Line2D([], [], color="purple", label="Negative GC Skew", marker="v", ms=6, ls="None")
]
fig = circos.plotfig()
_ = circos.ax.legend(handles=handle,
bbox_to_anchor=(0.9, 0.475),
fontsize=8)
# Add legend
circos.savefig(savefile = os.path.join(out_dir, "pharokka_plot.png"), dpi = 600)
```

I've attached an example plot too.
George
|
closed
|
2023-04-10T12:48:42Z
|
2024-05-03T02:36:32Z
|
https://github.com/moshi4/pyCirclize/issues/19
|
[
"question"
] |
gbouras13
| 2
|
mherrmann/helium
|
web-scraping
| 14
|
StaleElementReferenceException when matching elements exist in both iframe and main page
|
Hi, I have an SPA that I'm trying to do some automated testing with
helium works fine when the elements are in the main page, but the app uses AJAX to load some content into an iframe, so when I try to find those elements to click on, I get an exception
the following demonstrates the issue:
- I load the main page in the browser
- the main page of course has a body element
- the main page also contains an iframe element
- the iframe element contains another body element
- find_all looking for any body elements fails
- if I drop down into selenium, and manually switch between iframe and main body, I can resolve the body element correctly
- I think there is a bug when querying the element wrapper's web_element attributes when the selenium driver is switched to the wrong frame?
```
$ python3
Python 3.7.4 (default, Oct 21 2019, 15:59:56)
[Clang 8.0.0 (clang-800.0.42.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from helium import *
>>> driver=start_firefox()
>>> driver.current_url
'about:blank'
>>> go_to('http://loki.local')
>>> driver.current_url
'http://loki.local/'
>>> find_all(Text('Skip'))
[<div class="cursorPointer carouselSkip skipPurpleText" data-bind="text: tr.MSG_Skip, click: onSkip, css: skipAnimationCss, aria: {role:'button', tabIndex: 3, label: tr.MSG_SkipOnboardingInfo}">Skip</div>]
>>> find_all(S('span'))
[<span class="carouselPageTitleText" data-bind="text: title">Welcome!</span>]
>>> driver.switch_to.active_element
<selenium.webdriver.firefox.webelement.FirefoxWebElement (session="3efdb9c8-c699-7542-acdc-e4fd291ecad9", element="dfd8926b-98f8-b547-ac3a-46bb38ccb389")>
>>> driver.find_elements_by_tag_name('body')
[<selenium.webdriver.firefox.webelement.FirefoxWebElement (session="3efdb9c8-c699-7542-acdc-e4fd291ecad9", element="dfd8926b-98f8-b547-ac3a-46bb38ccb389")>]
>>> driver.find_elements_by_tag_name('iframe')
[<selenium.webdriver.firefox.webelement.FirefoxWebElement (session="3efdb9c8-c699-7542-acdc-e4fd291ecad9", element="bd9232e8-2444-ca4b-b64e-256d579c96b4")>]
>>> find_all(S('iframe'))
[<iframe id="appTepIFrame" name="appTepIFrame" allowfullscreen="" mozallowfullscreen="" webkitallowfullscreen="" oallowfullscreen="" msallowfullscreen="" data-bind="attr: {src: activeTepUrl, tabIndex: getTabIndex()}" tabindex="-1" src="http://loki.local/src/tep.html" width="100%" height="100%" frameborder="0"></iframe>]
>>> iframe=_[0]
>>> iframe.web_element
<selenium.webdriver.firefox.webelement.FirefoxWebElement (session="3efdb9c8-c699-7542-acdc-e4fd291ecad9", element="bd9232e8-2444-ca4b-b64e-256d579c96b4")>
>>> find_all(S('body'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/fulcrum/src/te-automation/venv/lib/python3.7/site-packages/helium/__init__.py", line 584, in __repr__
element_html = self.web_element.get_attribute('outerHTML')
File "/Users/fulcrum/src/te-automation/venv/lib/python3.7/site-packages/selenium/webdriver/remote/webelement.py", line 141, in get_attribute
self, name)
File "/Users/fulcrum/src/te-automation/venv/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 636, in execute_script
'args': converted_args})['value']
File "/Users/fulcrum/src/te-automation/venv/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/Users/fulcrum/src/te-automation/venv/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.StaleElementReferenceException: Message: The element reference of <body> is stale; either the element is no longer attached to the DOM, it is not in the current frame context, or the document has been refreshed
>>> driver.switch_to.default_content()
>>> iframe.web_element.get_attribute('src')
'http://loki.local/src/tep.html'
>>> driver.switch_to.frame(0)
>>> iframe.web_element.get_attribute('src')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/fulcrum/src/te-automation/venv/lib/python3.7/site-packages/selenium/webdriver/remote/webelement.py", line 141, in get_attribute
self, name)
File "/Users/fulcrum/src/te-automation/venv/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 636, in execute_script
'args': converted_args})['value']
File "/Users/fulcrum/src/te-automation/venv/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/Users/fulcrum/src/te-automation/venv/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.StaleElementReferenceException: Message: The element reference of <iframe id="appTepIFrame" name="appTepIFrame" src="http://loki.local/src/tep.html"> is stale; either the element is no longer attached to the DOM, it is not in the current frame context, or the document has been refreshed
```
|
open
|
2020-03-26T22:11:10Z
|
2022-06-14T13:02:38Z
|
https://github.com/mherrmann/helium/issues/14
|
[] |
thoughtextreme
| 5
|
microsoft/hummingbird
|
scikit-learn
| 515
|
Error with inputs on a model converted onnx->torch
|
For [this](https://github.com/microsoft/hummingbird/tree/kasaur/onnx_cols_prob/tests/errorcode) model and data, a user reported the following error when converting their model from onnx to torch:
```python
import onnx
from hummingbird.ml import convert
from hummingbird.ml import constants
import pandas as pd
m = "/root/hummingbird/tests/errorcode/mymodel.onnx"
csvstr="/root/hummingbird/tests/errorcode/X.csv"
csv=pd.read_csv(csvstr)
with open(m, 'rb') as binary_file:
modstr = binary_file.read()
hbout = convert( onnx.load_from_string(modstr),'torch', csv)
# this will give the error below
hbout.predict(csv)
```
```bash
Traceback (most recent call last):
File "/root/hummingbird/tests/errorcode/colserror.py", line 16, in <module>
hbout.predict(csv)
File "/root/hummingbird/hummingbird/ml/containers/_sklearn_api_containers.py", line 119, in predict
return self._run(self._predict, *inputs)
File "/root/hummingbird/hummingbird/ml/containers/_sklearn_api_containers.py", line 67, in _run
return function(*inputs)
File "/root/hummingbird/hummingbird/ml/containers/sklearn/pytorch_containers.py", line 192, in _predict
return self.model.forward(*inputs)[0].cpu().numpy().ravel()
File "/root/hummingbird/hummingbird/ml/_executor.py", line 73, in forward
self._input_names
AssertionError: number of inputs or number of columns in the dataframe do not match with the expected number of inputs ['loc', 'v_g_', 'ev_g_', 'iv_g_', 'n', 'v', 'l', 'd', 'i', 'e', 'b', 't', 'lOCode', 'lOComment', 'lOBlank', 'lOCodeAndComment', 'uniq_Op', 'total_Op', 'total_Opnd', 'branchCount']
```
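A possible workaround (my sketch, not something stated in the issue): reindex the DataFrame so its columns match the model's expected input names exactly before calling `predict`. The names below are a truncated, illustrative subset of the list quoted in the assertion message:

```python
import pandas as pd

# Expected input names (truncated subset of the assertion message above)
expected = ["loc", "v_g_", "ev_g_", "iv_g_"]

# A frame with the right columns in the wrong order, plus an extra one
df = pd.DataFrame({"v_g_": [1.0], "loc": [2.0], "iv_g_": [0.5],
                   "ev_g_": [3.0], "extra": [9.9]})

# Select and reorder to match what the converted model expects
aligned = df[expected]
print(list(aligned.columns))  # ['loc', 'v_g_', 'ev_g_', 'iv_g_']
```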
|
closed
|
2021-05-20T19:30:03Z
|
2021-05-20T20:41:19Z
|
https://github.com/microsoft/hummingbird/issues/515
|
[] |
ksaur
| 3
|
litestar-org/litestar
|
asyncio
| 3,090
|
Bug: session middleware cookies always include segment number
|
### Description
The docstring of [litestar.middleware.session.client_side.CookieBackendConfig](https://github.com/litestar-org/litestar/blob/04a9501d719c26ff4ef8451d216707a1b72a1c77/litestar/middleware/session/client_side.py#L209) suggests that the key for the cookie in the header is of the form `session=<data>`, unless the cookie is over 4KB, in which case it is split into segments named e.g. `session-0=<data-0>`, etc.
However, even when the cookie is under 4KB, the segment number is present in the key, i.e. it is impossible to have the key named plain `session` -- it will always turn into `session-0`. The reason for this is that
`litestar.middleware.session.client_side.ClientSideSessionBackend._create_session_cookies` returns the result of the list comprehension which includes `[Cookie(..., key=f"{self.config.key}-{i}", ...) for i, datum in enumerate(data)]`.
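A minimal sketch of the naming behavior (not litestar's actual code, just mirroring the quoted comprehension): even a single small chunk gets the `-0` suffix.

```python
def cookie_keys(base_key, chunks):
    # Mirrors the quoted comprehension: every chunk is numbered,
    # so there is never a plain "session" key, even for one small chunk.
    return [f"{base_key}-{i}" for i, _ in enumerate(chunks)]

print(cookie_keys("session", ["small-payload"]))      # ['session-0']
print(cookie_keys("session", ["part-a", "part-b"]))   # ['session-0', 'session-1']
```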
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.6.0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
|
closed
|
2024-02-09T08:42:18Z
|
2025-03-20T15:54:25Z
|
https://github.com/litestar-org/litestar/issues/3090
|
[
"Bug :bug:"
] |
betaprior
| 2
|
dask/dask
|
scikit-learn
| 11,612
|
bug: group-by with the same root name and different output names raises
|
<!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
**Minimal Complete Verifiable Example**:
check this out
```python
In [1]: import dask.dataframe as dd
In [2]: import pandas as pd
In [3]: df = pd.DataFrame({'a': [1,1,2], 'b': [4,5,6]})
In [4]: df.groupby('a').agg(c=('b', 'mean'), d=('b', 'mean'))
Out[4]:
c d
a
1 4.5 4.5
2 6.0 6.0
In [5]: dd.from_pandas(df).groupby('a').agg(c=('b', 'mean'), d=('b', 'mean'))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[5], line 1
----> 1 dd.from_pandas(df).groupby('a').agg(c=('b', 'mean'), d=('b', 'mean'))
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask_expr/_groupby.py:1955, in GroupBy.agg(self, *args, **kwargs)
1954 def agg(self, *args, **kwargs):
-> 1955 return self.aggregate(*args, **kwargs)
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask_expr/_groupby.py:1934, in GroupBy.aggregate(self, arg, split_every, split_out, shuffle_method, **kwargs)
1931 if arg == "size":
1932 return self.size()
-> 1934 result = new_collection(
1935 GroupbyAggregation(
1936 self.obj.expr,
1937 arg,
1938 self.observed,
1939 self.dropna,
1940 split_every,
1941 split_out,
1942 self.sort,
1943 shuffle_method,
1944 self._slice,
1945 *self.by,
1946 )
1947 )
1948 if relabeling and result is not None:
1949 if order is not None:
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask_expr/_collection.py:4835, in new_collection(expr)
4833 def new_collection(expr):
4834 """Create new collection from an expr"""
-> 4835 meta = expr._meta
4836 expr._name # Ensure backend is imported
4837 return get_collection_type(meta)(expr)
File ~/.local/share/uv/python/cpython-3.12.6-linux-x86_64-gnu/lib/python3.12/functools.py:993, in cached_property.__get__(self, instance, owner)
991 val = cache.get(self.attrname, _NOT_FOUND)
992 if val is _NOT_FOUND:
--> 993 val = self.func(instance)
994 try:
995 cache[self.attrname] = val
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask_expr/_groupby.py:439, in GroupbyAggregation._meta(self)
437 @functools.cached_property
438 def _meta(self):
--> 439 return self._lower()._meta
File ~/.local/share/uv/python/cpython-3.12.6-linux-x86_64-gnu/lib/python3.12/functools.py:993, in cached_property.__get__(self, instance, owner)
991 val = cache.get(self.attrname, _NOT_FOUND)
992 if val is _NOT_FOUND:
--> 993 val = self.func(instance)
994 try:
995 cache[self.attrname] = val
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask_expr/_reductions.py:440, in ApplyConcatApply._meta(self)
438 @functools.cached_property
439 def _meta(self):
--> 440 meta = self._meta_chunk
441 aggregate = self.aggregate or (lambda x: x)
442 if self.combine:
File ~/.local/share/uv/python/cpython-3.12.6-linux-x86_64-gnu/lib/python3.12/functools.py:993, in cached_property.__get__(self, instance, owner)
991 val = cache.get(self.attrname, _NOT_FOUND)
992 if val is _NOT_FOUND:
--> 993 val = self.func(instance)
994 try:
995 cache[self.attrname] = val
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask_expr/_groupby.py:213, in GroupByApplyConcatApply._meta_chunk(self)
210 @functools.cached_property
211 def _meta_chunk(self):
212 meta = meta_nonempty(self.frame._meta)
--> 213 return self.chunk(meta, *self._by_meta, **self.chunk_kwargs)
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask_expr/_groupby.py:530, in DecomposableGroupbyAggregation.chunk_kwargs(self)
527 @property
528 def chunk_kwargs(self) -> dict:
529 return {
--> 530 "funcs": self.agg_args["chunk_funcs"],
531 "sort": self.sort,
532 **_as_dict("observed", self.observed),
533 **_as_dict("dropna", self.dropna),
534 }
File ~/.local/share/uv/python/cpython-3.12.6-linux-x86_64-gnu/lib/python3.12/functools.py:993, in cached_property.__get__(self, instance, owner)
991 val = cache.get(self.attrname, _NOT_FOUND)
992 if val is _NOT_FOUND:
--> 993 val = self.func(instance)
994 try:
995 cache[self.attrname] = val
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask_expr/_groupby.py:411, in GroupbyAggregationBase.agg_args(self)
408 @functools.cached_property
409 def agg_args(self):
410 keys = ["chunk_funcs", "aggregate_funcs", "finalizers"]
--> 411 return dict(zip(keys, _build_agg_args(self.spec)))
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask/dataframe/groupby.py:875, in _build_agg_args(spec)
873 for funcs in by_name.values():
874 if len(funcs) != 1:
--> 875 raise ValueError(f"conflicting aggregation functions: {funcs}")
877 chunks = {}
878 aggs = {}
ValueError: conflicting aggregation functions: [('mean', 'b'), ('mean', 'b')]
```
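Until this is fixed, one workaround (sketched here at the plain-pandas level, since the failure happens while dask builds its aggregation spec) might be to aggregate each distinct `(column, func)` root once and duplicate the result column afterwards:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2], 'b': [4, 5, 6]})

# Aggregate each distinct (column, func) pair only once...
out = df.groupby('a').agg(c=('b', 'mean'))
# ...then duplicate the column under the second requested output name.
out['d'] = out['c']
```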
**Anything else we need to know?**:
Spotted in Narwhals (because we are so awesome 😎 )
**Environment**:
- Dask version: 2024.12.1
- Python version: 3.12
- Operating System: linux
- Install method (conda, pip, source): pip
|
open
|
2024-12-18T13:25:01Z
|
2025-02-17T02:01:01Z
|
https://github.com/dask/dask/issues/11612
|
[
"needs attention",
"enhancement",
"dask-expr"
] |
MarcoGorelli
| 0
|
dynaconf/dynaconf
|
fastapi
| 1,140
|
[RFC]typed: Generate Sample Config File from Schema
|
Given
```py
class Database(DictValue):
    """Represents a database connection"""

    host: Annotated[str, Doc("Database hostname")]
    port: Annotated[int, Doc("Database port"), Gt(5000)] = 5432


class Settings(Dynaconf):
    """Settings for XPTO system."""

    name: str
    debug: bool = False
    database: Database
    options: NotRequired[dict[str, Any]]
```
On the CLI:
```console
$ dynaconf -i config.settings generate-settings --format yaml > settings.yaml
```
Generates
```yaml
# Settings for XPTO system.

# type: str
# required
name:

# type: bool
# default False
debug: false

# type: dict
# Represents a database connection
# model: config.Database
database:
  # Database hostname
  # type: str
  # required
  host:

  # Database port
  # type: int
  # > 5000
  # default 5432
  port: 5432

# type: dict[str, Any]
# notrequired
# options:
```
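The generator could plausibly be driven by the class annotations themselves; here is a minimal, hypothetical sketch (not Dynaconf code) that emits commented stubs from `Annotated` metadata:

```python
from typing import Annotated, get_args, get_origin, get_type_hints


def sample_config(cls) -> str:
    """Hypothetical sketch: emit a commented YAML stub from class annotations."""
    lines = []
    for name, hint in get_type_hints(cls, include_extras=True).items():
        base = hint
        if get_origin(hint) is Annotated:
            base, *meta = get_args(hint)
            for m in meta:  # e.g. Doc(...) entries would be rendered as comments
                lines.append(f"# {m}")
        lines.append(f"# type: {getattr(base, '__name__', base)}")
        lines.append(f"{name}:")
    return "\n".join(lines)


class Demo:
    name: str
    port: Annotated[int, "Database port"]


stub = sample_config(Demo)
```

A real implementation would also need to recurse into nested `DictValue` models and honor defaults and `NotRequired`, which this sketch ignores.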
|
open
|
2024-07-07T14:10:15Z
|
2024-07-08T18:38:21Z
|
https://github.com/dynaconf/dynaconf/issues/1140
|
[
"Not a Bug",
"RFC",
"typed_dynaconf"
] |
rochacbruno
| 0
|
statsmodels/statsmodels
|
data-science
| 9,014
|
Cannot cast ufunc 'subtract' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
|
Please help me: I'm having problems with my code, and this is my first time using Python.
---------------------------------------------------------------------------
UFuncTypeError Traceback (most recent call last)
Cell In[83], line 2
1 var_model = VARMAX(train_df, order=(4,0),enforce_stationarity= True)
----> 2 fitted_model = var_model.fit(disp=False)
3 print(fitted_model.summary())
File ~\anaconda3\Lib\site-packages\statsmodels\tsa\statespace\mlemodel.py:650, in MLEModel.fit(self, start_params, transformed, includes_fixed, cov_type, cov_kwds, method, maxiter, full_output, disp, callback, return_params, optim_score, optim_complex_step, optim_hessian, flags, low_memory, **kwargs)
530 """
531 Fits the model by maximum likelihood via Kalman filter.
532
(...)
647 statsmodels.tsa.statespace.structural.UnobservedComponentsResults
648 """
649 if start_params is None:
--> 650 start_params = self.start_params
651 transformed = True
652 includes_fixed = True
File ~\anaconda3\Lib\site-packages\statsmodels\tsa\statespace\varmax.py:348, in VARMAX.start_params(self)
346 if self.k_trend > 0 or self.k_exog > 0:
347 trendexog_params = np.linalg.pinv(exog).dot(endog)
--> 348 endog -= np.dot(exog, trendexog_params)
349 if self.k_trend > 0:
350 trend_params = trendexog_params[:self.k_trend].T
UFuncTypeError: Cannot cast ufunc 'subtract' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
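The traceback shows `endog -= np.dot(exog, trendexog_params)`: VARMAX subtracts float trend terms from the endogenous data in place, which fails when the columns are integer-typed. A likely fix (sketched here with a stand-in frame, since the original `train_df` isn't shown) is to cast the training data to float before fitting:

```python
import numpy as np
import pandas as pd

# Stand-in for the user's train_df; the point is the dtype, not the values.
train_df = pd.DataFrame({"y1": [1, 2, 3, 4], "y2": [2, 3, 4, 5]})

# Cast to float64 so the in-place float subtraction inside VARMAX is valid.
train_df = train_df.astype(np.float64)
```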
|
open
|
2023-09-29T16:28:36Z
|
2023-09-29T16:30:52Z
|
https://github.com/statsmodels/statsmodels/issues/9014
|
[] |
laluna87
| 0
|
sunscrapers/djoser
|
rest-api
| 539
|
User create Django password validators problem
|
Hey guys, awesome work on `djoser`!
I encountered an error in `django==3.0.9` + `djoser==2.0.3`.
Django `MinimumLengthValidator` throws the following error (applicable to other Django validators as well):
```
Server Error: /v1/auth/users/
django_1 | Traceback (most recent call last):
django_1 | File "/pyroot/lib/python3.8/site-packages/djoser/serializers.py", line 54, in validate
django_1 | validate_password(password, user)
django_1 | File "/pyroot/lib/python3.8/site-packages/django/contrib/auth/password_validation.py", line 51, in validate_password
django_1 | raise ValidationError(errors)
django_1 | django.core.exceptions.ValidationError: ['This password is too short. It must contain at least 8 characters.']
django_1 |
django_1 | During handling of the above exception, another exception occurred:
django_1 |
django_1 | Traceback (most recent call last):
django_1 | File "/pyroot/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner
django_1 | response = get_response(request)
django_1 | File "/pyroot/lib/python3.8/site-packages/django/core/handlers/base.py", line 115, in _get_response
django_1 | response = self.process_exception_by_middleware(e, request)
django_1 | File "/pyroot/lib/python3.8/site-packages/django/core/handlers/base.py", line 113, in _get_response
django_1 | response = wrapped_callback(request, *callback_args, **callback_kwargs)
django_1 | File "/pyroot/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
django_1 | return view_func(*args, **kwargs)
django_1 | File "/pyroot/lib/python3.8/site-packages/rest_framework/viewsets.py", line 114, in view
django_1 | return self.dispatch(request, *args, **kwargs)
django_1 | File "/pyroot/lib/python3.8/site-packages/rest_framework/views.py", line 505, in dispatch
django_1 | response = self.handle_exception(exc)
django_1 | File "/pyroot/lib/python3.8/site-packages/rest_framework/views.py", line 465, in handle_exception
django_1 | self.raise_uncaught_exception(exc)
django_1 | File "/pyroot/lib/python3.8/site-packages/rest_framework/views.py", line 476, in raise_uncaught_exception
django_1 | raise exc
django_1 | File "/pyroot/lib/python3.8/site-packages/rest_framework/views.py", line 502, in dispatch
django_1 | response = handler(request, *args, **kwargs)
django_1 | File "/pyroot/lib/python3.8/site-packages/rest_framework/mixins.py", line 18, in create
django_1 | serializer.is_valid(raise_exception=True)
django_1 | File "/pyroot/lib/python3.8/site-packages/rest_framework/serializers.py", line 234, in is_valid
django_1 | self._validated_data = self.run_validation(self.initial_data)
django_1 | File "/pyroot/lib/python3.8/site-packages/rest_framework/serializers.py", line 436, in run_validation
django_1 | value = self.validate(value)
django_1 | File "/pyroot/lib/python3.8/site-packages/djoser/serializers.py", line 58, in validate
django_1 | {"password": serializer_error["non_field_errors"]}
django_1 | KeyError: 'non_field_errors'
```
One way to go around this for now is:
```python
# common/password_validation.py
from django.contrib.auth import password_validation as pv
from django.core.exceptions import ValidationError as DjangoValidationError
from rest_framework.exceptions import ValidationError


class MinimumLengthValidator(pv.MinimumLengthValidator):
    def validate(self, password, user=None):
        try:
            super().validate(password, user)
        except DjangoValidationError as cause:
            raise ValidationError(
                detail=cause.messages[0] % {"min_length": cause.params["min_length"]},
            )
```
|
open
|
2020-09-29T10:39:35Z
|
2022-10-23T18:11:54Z
|
https://github.com/sunscrapers/djoser/issues/539
|
[] |
ghost
| 3
|
feder-cr/Jobs_Applier_AI_Agent_AIHawk
|
automation
| 653
|
[HELP WANTED]: <Runtime error | Input should be a valid dictionary or instance of certifications>
|
### Issue description
**Not sure what I need to correct, but this is what I keep getting when I try to run the bot:**
Runtime error: Error running the bot: Unexpected error while parsing YAML: 2 validation errors for Resume
certifications.0
Input should be a valid dictionary or instance of certifications [type=model_type, input_value='Certified Scrum Master', input_type=str]
For further information visit https://errors.pydantic.dev/2.9/v/model_type
certifications.1
Input should be a valid dictionary or instance of certifications [type=model_type, input_value='AWS Certified Solutions Architect', input_type=str]
For further information visit https://errors.pydantic.dev/2.9/v/model_type
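The validator expects each certification entry to be a mapping, not a bare string. A sketch of the likely shape (the key name below is an assumption; check the resume schema shipped with the project):

```yaml
# Sketch: certifications as a list of mappings rather than plain strings.
certifications:
  - name: "Certified Scrum Master"
  - name: "AWS Certified Solutions Architect"
```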
|
closed
|
2024-10-29T03:23:19Z
|
2024-10-29T22:18:11Z
|
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/653
|
[
"help wanted"
] |
ResearchingLife
| 1
|
Lightning-AI/pytorch-lightning
|
deep-learning
| 20,033
|
Can't save models via the ModelCheckpoint() when using custom optimizer
|
### Bug description
Dear all,
I want to use a [Hessian-Free LM optimizer](https://github.com/ltatzel/PyTorchHessianFree) to replace the PyTorch L-BFGS optimizer. However, the model is not saved when I use ModelCheckpoint(), while torch.save() and Trainer.save_checkpoint() still work. You can find my test script below. Could you give me some suggestions for handling this problem?
Thanks!
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
import numpy as np
import pandas as pd
import time
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import matplotlib.pyplot as plt
import lightning as L
from lightning.pytorch import LightningModule
from lightning.pytorch.loggers import CSVLogger
from lightning.pytorch.callbacks.model_checkpoint import ModelCheckpoint
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks.early_stopping import EarlyStopping

from hessianfree.optimizer import HessianFree


class LitModel(LightningModule):
    def __init__(self, loss):
        super().__init__()
        self.tanh_linear = nn.Sequential(
            nn.Linear(1, 20),
            nn.Tanh(),
            nn.Linear(20, 20),
            nn.Tanh(),
            nn.Linear(20, 1),
        )
        self.loss_fn = nn.MSELoss()
        self.automatic_optimization = False
        return

    def forward(self, x):
        out = self.tanh_linear(x)
        return out

    def configure_optimizers(self):
        optimizer = HessianFree(
            self.parameters(),
            cg_tol=1e-6,
            cg_max_iter=1000,
            lr=1e0,
            LS_max_iter=1000,
            LS_c=1e-3
        )
        return optimizer

    def training_step(self, batch, batch_idx):
        x, y = batch
        opt = self.optimizers()

        def forward_fn():
            y_pred = self(x)
            loss = self.loss_fn(y_pred, y)
            return loss, y_pred

        opt.optimizer.step(forward=forward_fn)
        loss, y_pred = forward_fn()
        self.log("train_loss", loss, on_epoch=True, on_step=False)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        val_loss = self.loss_fn(y_hat, y)
        # passed to early stopping
        self.log("val_loss", val_loss, on_epoch=True, on_step=False)
        return val_loss

    def test_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = self.loss_fn(y_hat, y)
        return loss


def main():
    input_size = 20000
    train_size = int(input_size * 0.9)
    test_size = input_size - train_size
    batch_size = 1000

    x_total = np.linspace(-1.0, 1.0, input_size, dtype=np.float32)
    x_total = np.random.choice(x_total, size=input_size, replace=False)  # random sampling
    x_train = x_total[0:train_size]
    x_train = x_train.reshape((train_size, 1))
    x_test = x_total[train_size:input_size]
    x_test = x_test.reshape((test_size, 1))
    x_train = torch.from_numpy(x_train)
    x_test = torch.from_numpy(x_test)
    y_train = torch.from_numpy(np.sinc(10.0 * x_train))
    y_test = torch.from_numpy(np.sinc(10.0 * x_test))

    training_data = TensorDataset(x_train, y_train)
    test_data = TensorDataset(x_test, y_test)

    # Create data loaders.
    train_dataloader = DataLoader(training_data, batch_size=batch_size
                                  # , num_workers=2
                                  )
    test_dataloader = DataLoader(test_data, batch_size=batch_size
                                 # , num_workers=2
                                 )

    for X, y in test_dataloader:
        print("Shape of X: ", X.shape)
        print("Shape of y: ", y.shape, y.dtype)
        break
    for X, y in train_dataloader:
        print("Shape of X: ", X.shape)
        print("Shape of y: ", y.shape, y.dtype)
        break

    loss_fn = nn.MSELoss()
    model = LitModel(loss_fn)

    # prepare trainer
    opt_label = 'lm_HF_t20'
    logger = CSVLogger(f"./{opt_label}", name=f"test-{opt_label}", flush_logs_every_n_steps=1)
    epochs = 1e1
    print(f"test for {opt_label}")
    early_stop_callback = EarlyStopping(
        monitor="val_loss"
        , min_delta=1e-9
        , patience=10
        , verbose=False, mode="min"
        , stopping_threshold=1e-8  # stop if reaching accuracy
    )
    modelck = ModelCheckpoint(
        dirpath=f"./{opt_label}"
        , monitor="val_loss"
        , save_last=True
        # , save_top_k=2
        # , mode='min'
        # , every_n_epochs=1
        # , save_on_train_epoch_end=True
        # , save_weights_only=True,
    )
    Train_model = Trainer(
        accelerator="cpu"
        , max_epochs=int(epochs)
        , enable_progress_bar=True  # using progress bar
        # , callbacks=[modelck, early_stop_callback]  # using early stopping
        , callbacks=[modelck]  # not using early stopping
        , logger=logger
        # , num_processes=16
    )

    t1 = time.time()
    Train_model.fit(model, train_dataloaders=train_dataloader, val_dataloaders=test_dataloader)
    t2 = time.time()
    print('total time')
    print(t2 - t1)

    # torch.save() and Trainer.save_checkpoint() can save the model, but ModelCheckpoint() can't.
    # torch.save(model.state_dict(), f"model{opt_label}.pth")
    # print(f"Saved PyTorch Model State to model{opt_label}.pth")
    # Train_model.save_checkpoint(f"model{opt_label}.ckpt")
    # print(f"Saved PL Model State to model{opt_label}.ckpt")
    exit()
    return


if __name__ == '__main__':
    main()
```
### Error messages and logs
The program does not report an error, but ModelCheckpoint() does not save any model when I use a custom optimizer.
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU: None
- available: False
- version: 12.1
* Lightning:
- backpack-for-pytorch: 1.6.0
- lightning: 2.2.0
- lightning-utilities: 0.11.3.post0
- pytorch-lightning: 2.2.3
- torch: 2.2.0
- torchaudio: 2.0.1
- torchmetrics: 0.11.4
- torchvision: 0.15.1
* Packages:
- aiohttp: 3.9.1
- aiosignal: 1.3.1
- async-timeout: 4.0.3
- attrs: 23.2.0
- backpack-for-pytorch: 1.6.0
- bottleneck: 1.3.5
- certifi: 2022.12.7
- charset-normalizer: 3.1.0
- cmake: 3.26.0
- colorama: 0.4.6
- contourpy: 1.2.1
- cycler: 0.12.1
- einops: 0.8.0
- filelock: 3.10.0
- fonttools: 4.51.0
- frozenlist: 1.4.1
- fsspec: 2023.3.0
- hessianfree: 0.1
- idna: 3.4
- jinja2: 3.1.2
- kiwisolver: 1.4.5
- lightning: 2.2.0
- lightning-utilities: 0.11.3.post0
- lit: 15.0.7
- markupsafe: 2.1.2
- matplotlib: 3.8.4
- mpmath: 1.3.0
- multidict: 6.0.4
- networkx: 3.0
- numexpr: 2.8.4
- numpy: 1.24.2
- nvidia-cublas-cu11: 11.10.3.66
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu11: 11.7.101
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu11: 11.7.99
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu11: 11.7.99
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu11: 8.5.0.96
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu11: 10.9.0.58
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu11: 10.2.10.91
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu11: 11.4.0.1
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu11: 11.7.4.91
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu11: 2.14.3
- nvidia-nccl-cu12: 2.19.3
- nvidia-nvjitlink-cu12: 12.3.101
- nvidia-nvtx-cu11: 11.7.91
- nvidia-nvtx-cu12: 12.1.105
- packaging: 23.0
- pandas: 1.5.3
- pillow: 9.4.0
- pip: 24.1.1
- pyparsing: 3.1.2
- python-dateutil: 2.8.2
- pytorch-lightning: 2.2.3
- pytz: 2022.7
- pyyaml: 6.0
- requests: 2.28.2
- setuptools: 67.6.0
- six: 1.16.0
- sympy: 1.11.1
- torch: 2.2.0
- torchaudio: 2.0.1
- torchmetrics: 0.11.4
- torchvision: 0.15.1
- tqdm: 4.65.0
- triton: 2.2.0
- typing-extensions: 4.11.0
- unfoldnd: 0.2.1
- urllib3: 1.26.15
- wheel: 0.40.0
- yarl: 1.9.4
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.10.9
- release: 3.10.0-862.el7.x86_64
- version: #1 SMP Fri Apr 20 16:44:24 UTC 2018
</details>
### More info
_No response_
|
open
|
2024-07-01T08:54:11Z
|
2024-07-01T08:54:11Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20033
|
[
"bug",
"needs triage"
] |
youli-jlu
| 0
|
AirtestProject/Airtest
|
automation
| 508
|
I want to run click tests against the Unity or Unreal editor, but the screenshot template cannot be matched
|
**Describe the bug**
I want to run click tests against the Unity or Unreal editor, but the screenshot template cannot be matched.
```
"D:\Program\AirtestIDE_2019-05-09_py3_win64\AirtestIDE" runner "C:\Users\HASEE\AppData\Local\Temp\AirtestIDE\Scripts\untitled.air" --device Windows:///992732 --log "C:\Users\HASEE\AppData\Local\Temp\AirtestIDE\scripts\3f0954b11e33f5b714bfc3ac7ad1925d"
============================================================
[Start running..]
save log in 'C:\Users\HASEE\AppData\Local\Temp\AirtestIDE\scripts\3f0954b11e33f5b714bfc3ac7ad1925d'
D:\Program\AirtestIDE_2019-05-09_py3_win64\pywinauto\application.py:1032: UserWarning: 32-bit application should be automated using 32-bit Python (you use 64-bit Python)
[01:09:27][INFO]<airtest.core.api> Try finding:
Template(C:\Users\HASEE\AppData\Local\Temp\AirtestIDE\Scripts\untitled.air\tpl1567050651716.png)
[01:09:28][DEBUG]<airtest.core.api> resize: (50, 57)->(1, 1), resolution: (1936, 1056)=>(0, 0)
[01:09:28][DEBUG]<airtest.core.api> try match with SURFMatching
[01:09:28][DEBUG]<airtest.aircv.keypoint_base> find_best_result() run time is 0.00 s.
[01:09:28][DEBUG]<airtest.core.api> try match with TemplateMatching
[01:09:28][DEBUG]<airtest.core.api> 'error: in template match, found im_search bigger than im_source.'
[01:09:28][DEBUG]<airtest.core.api> try match with BRISKMatching
[01:09:28][DEBUG]<airtest.aircv.keypoint_base> find_best_result() run time is 0.00 s.
[01:09:28][DEBUG]<airtest.core.api> match result: None
[01:09:28][DEBUG]<airtest.core.api> resize: (50, 57)->(1, 1), resolution: (1936, 1056)=>(0, 0)
....
....
....
....
[01:09:47][DEBUG]<airtest.core.api> try match with BRISKMatching
[01:09:47][DEBUG]<airtest.aircv.keypoint_base> find_best_result() run time is 0.00 s.
[01:09:47][DEBUG]<airtest.core.api> match result: None
[01:09:47][DEBUG]<airtest.core.api> resize: (50, 57)->(1, 1), resolution: (1936, 1056)=>(0, 0)
[01:09:47][DEBUG]<airtest.core.api> try match with SURFMatching
[01:09:47][DEBUG]<airtest.aircv.keypoint_base> find_best_result() run time is 0.00 s.
[01:09:47][DEBUG]<airtest.core.api> try match with TemplateMatching
[01:09:47][DEBUG]<airtest.core.api> 'error: in template match, found im_search bigger than im_source.'
[01:09:47][DEBUG]<airtest.core.api> try match with BRISKMatching
[01:09:47][DEBUG]<airtest.aircv.keypoint_base> find_best_result() run time is 0.00 s.
[01:09:47][DEBUG]<airtest.core.api> match result: None
======================================================================
ERROR: runTest (app.widgets.code_runner.ide_launcher.AirtestIDECase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "airtest\cli\runner.py", line 65, in runTest
File "site-packages\six.py", line 693, in reraise
File "airtest\cli\runner.py", line 61, in runTest
File "C:\Users\HASEE\AppData\Local\Temp\AirtestIDE\Scripts\untitled.air\untitled.py", line 6, in <module>
touch(Template(r"tpl1567050651716.png", record_pos=(-0.013, 0.072), resolution=(1936, 1056)))
File "airtest\utils\logwraper.py", line 72, in wrapper
File "airtest\core\api.py", line 257, in touch
File "airtest\utils\logwraper.py", line 72, in wrapper
File "airtest\core\cv.py", line 76, in loop_find
File "airtest\utils\logwraper.py", line 72, in wrapper
File "airtest\core\cv.py", line 100, in try_log_screen
File "airtest\aircv\aircv.py", line 26, in imwrite
cv2.error: OpenCV(3.4.2) C:\projects\opencv-python\opencv\modules\imgcodecs\src\grfmt_base.cpp:145: error: (-10:Unknown error code -10) Raw image encoder error: Empty JPEG image (DNL not supported) in function 'cv::BaseImageEncoder::throwOnEror'
----------------------------------------------------------------------
Ran 1 test in 20.377s
FAILED (errors=1)
[Finished]
============================================================
```
**Related screenshots**
(Paste screenshots of the problem here, if any)
(For image- or device-related problems in AirtestIDE, please also paste the relevant error output from the AirtestIDE console window)
**Steps to reproduce**
1. Use `touch` to select the Play button in the Unreal editor
2. Click Run in AirtestIDE
**Expected behavior**
Unreal Engine should be triggered into the Play state
**Python version:** `python3.5`
**AirtestIDE version:** `1.2.1`
> The airtest version can be found with the `pip freeze` command
**Device:**
- Model: [e.g. PC i7 9700K Nvidia 1080 dual monitors]
- OS: [e.g. win10]
- (other info)
**Other relevant environment information**
Buttons in SourceTree or VS Code can be captured normally
|
closed
|
2019-08-29T05:16:06Z
|
2019-08-29T05:28:10Z
|
https://github.com/AirtestProject/Airtest/issues/508
|
[] |
t1633361
| 1
|
graphql-python/graphene-sqlalchemy
|
sqlalchemy
| 283
|
Question — Is it possible order results according to a function of the input query?
|
I've posted the same question on [Stackoverflow](https://stackoverflow.com/q/63249166/1720199), but I though this might be related to your concerns.
Here some selected content from this question:
---
I'm using graphene with sqlalchemy and I have an output object that contains a computed field. The field is computed according to some input query parameters. To simplify, let's consider the computation of a function *f(x)=ax+b* where *a* and *b* are both columns in my `Thing` table and *x* is a variable (GraphQL query parameter):
```python
import models


class Thing(SQLAlchemyObjectType):
    class Meta:
        model = models.Thing
        interfaces = (relay.Node, )

    f = graphene.Field(graphene.Float)

    def resolve_f(self, info):
        return self.a * info.context['x'] + self.b
```
In my query I have the following and I would like to sort fields according to the output of the function `f`:
```python
class Query(graphene.ObjectType):
    best_points = graphene.List(lambda: Thing, x=graphene.Float())

    def resolve_best_points(self, info, x):
        query = Thing.get_query(info)
        return query.all()
```
Is there a way to achieve this in a clean way? Adding some option in `Thing`? Or maybe obtaining the output values within the resolver and then sorting? Or something a bit uglier like adding a middleware to sort outputs from `resolve_best_points`?
[1]: https://stackoverflow.com/questions/16966163/sqlalchemy-order-by-calculated-column
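If the result set is small enough to materialize, one straightforward option is to compute *f* in Python and sort inside the resolver. A minimal sketch (using a namedtuple as a stand-in for ORM rows, which only need `.a` and `.b` attributes):

```python
from collections import namedtuple

# Stand-in for ORM rows; the real resolver would use query.all() instead.
Row = namedtuple("Row", "a b")


def best_points(rows, x, descending=True):
    # Sort by f(x) = a*x + b, evaluated per row.
    return sorted(rows, key=lambda r: r.a * x + r.b, reverse=descending)
```

For larger tables it may be cleaner to push the expression into SQL, e.g. `query.order_by((models.Thing.a * x + models.Thing.b).desc())`, since SQLAlchemy column arithmetic produces an orderable expression.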
|
closed
|
2020-08-06T10:46:33Z
|
2023-02-25T00:48:41Z
|
https://github.com/graphql-python/graphene-sqlalchemy/issues/283
|
[] |
cglacet
| 2
|
ultralytics/ultralytics
|
computer-vision
| 19,794
|
Permission issue encountered while training YOLO11 on custom datasets.
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Train
### Bug
I have multiple datasets for training YOLO11. To minimize disk usage, I use unionfs-fuse to merge all the datasets. However, when training YOLO11 on the merged datasets, an error occurs because permission is denied for some images. Interestingly, when I re-merge the datasets by copying them instead, the issue does not reoccur.
### Environment
OS Linux-5.15.0-113-generic-x86_64-with-glibc2.31
Environment Linux
Python 3.9.20
Install pip
Path /home/ad/anaconda3/envs/LLMS/lib/python3.9/site-packages/ultralytics
RAM 376.54 GB
Disk 800.4/846.7 GB
CPU Intel Xeon Platinum 8260 2.40GHz
CPU count 96
GPU NVIDIA RTX A6000, 48577MiB
GPU count 8
CUDA 12.1
numpy ✅ 2.0.2<=2.1.1,>=1.23.0
matplotlib ✅ 3.9.3>=3.3.0
opencv-python ✅ 4.10.0.82>=4.6.0
pillow ✅ 10.4.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.13.1>=1.4.1
torch ✅ 2.5.1>=1.8.0
torch ✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1>=0.9.0
tqdm ✅ 4.66.1>=4.64.0
psutil ✅ 6.1.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
### Minimal Reproducible Example
can't
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
open
|
2025-03-20T08:00:34Z
|
2025-03-21T04:27:23Z
|
https://github.com/ultralytics/ultralytics/issues/19794
|
[] |
keeganNull
| 2
|
521xueweihan/HelloGitHub
|
python
| 2,902
|
[Open-source self-recommendation] 🏎 Nping: a real-time visual terminal ping tool written in Rust
|
- Project URL: https://github.com/hanshuaikang/Nping
- Category: Rust
- Project title: A real-time visual terminal ping tool written in Rust
- Project description: Nping, as the name suggests ("an awesome ping"), is a visual ping tool written in Rust. Main features:
- Real-time preview of results in per-target split charts for multiple hosts
- Ping all IPs of a single domain simultaneously
- Table view with real-time sorting
- IPv6 support
- Preview:


|
closed
|
2025-02-16T05:45:30Z
|
2025-03-10T10:06:25Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2902
|
[
"已发布"
] |
hanshuaikang
| 0
|
davidsandberg/facenet
|
computer-vision
| 278
|
Embeddings differ when run in a batch or single image.
|
I am getting different results depending on how many images I test with in a batch or when I evaluate (order of execution). It almost seems as though batch normalization or something else is being updated as if in train mode. Has anyone had any luck running inference on a single image? Shouldn't the result be the same for a given image no matter when you run it or how many are in the batch?
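A frequent cause of this symptom is batch normalization running in training mode at inference time, so each embedding is normalized with the current batch's statistics rather than the stored moving averages. A minimal NumPy sketch of why that makes outputs batch-dependent (illustration only, not facenet code):

```python
import numpy as np


def batchnorm(x, train, running_mean=0.0, running_var=1.0, eps=1e-5):
    # Train mode: normalize with the current batch's statistics, so the
    # result for one image depends on the other images in the batch.
    if train:
        mu, var = x.mean(axis=0), x.var(axis=0)
    else:  # Eval mode: fixed moving averages -> batch-independent output.
        mu, var = running_mean, running_var
    return (x - mu) / np.sqrt(var + eps)


single = np.array([[1.0, 2.0]])
batch = np.array([[1.0, 2.0], [3.0, 4.0]])
```

In eval mode the first row of `batchnorm(batch, False)` matches `batchnorm(single, False)`; in train mode they differ. In this repository that corresponds, if I recall the graph conventions correctly, to feeding the `phase_train` placeholder as `False` when computing embeddings.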
|
closed
|
2017-05-16T23:14:14Z
|
2017-05-17T21:24:44Z
|
https://github.com/davidsandberg/facenet/issues/278
|
[] |
colojaro
| 1
|
mwaskom/seaborn
|
matplotlib
| 3,713
|
Setting custom formatter in so.Nominal().label() seems to do nothing
|
When using the `so.Nominal()` scale of the objects interface, calling `label()` with a custom formatter seems to have no effect. It could also be that I am using the function incorrectly, since the API and documentation on the nominal scale are still being worked on.
Demo:
```py
import matplotlib as mpl
import numpy as np
import pandas as pd
import seaborn.objects as so
xs = range(10)
ys = np.random.random(10)
df = pd.DataFrame(dict(x=xs, y=ys))
formatter = mpl.ticker.FuncFormatter(lambda x, pos: str(x) if x % 3 == 0 else "")
(so.Plot(df, x="x", y="y")
    .add(so.Dots())
    .scale(x=so.Nominal().label(formatter))
)
```

The x-axis ticks are correctly transformed to a nominal scale. However, I would expect to only see tick labels that divide by 3 based on the formatter.
Tested on Seaborn v0.13.2 and Matplotlib v3.9.0
|
open
|
2024-06-20T14:51:13Z
|
2024-10-18T10:19:02Z
|
https://github.com/mwaskom/seaborn/issues/3713
|
[] |
rbergm
| 1
|
predict-idlab/plotly-resampler
|
plotly
| 124
|
`max_n_samples` in add trace(s) seems to do nothing with `plotly-resampler=0.8.1`
|
I was able to pinpoint the issue, and it seems that this line within `AbstractFigureResampler` causes the issue:
```python
super(self._figure_class, self).add_trace(trace, **trace_kwargs)
```
This calls the `add_trace` method of `BaseTraceDatatype`, which in turn:

calls `self.add_trace` (bringing us back to the `AbstractFigureResampler` class), and as a result the trace gets aggregated.
|
closed
|
2022-10-19T15:00:43Z
|
2022-10-22T19:04:12Z
|
https://github.com/predict-idlab/plotly-resampler/issues/124
|
[
"bug"
] |
jonasvdd
| 1
|
allure-framework/allure-python
|
pytest
| 403
|
Unable to see pytest assume failure information in Allure.
|
Environment:
Python 3.7.3
Pytest 5.0.0
Allure-pytest 2.7.0
Code snippets:
```python
import pytest


def add(a, b):
    return a + b


def test_add1():
    assert add(3, 4) == 7


def test_add2():
    pytest.assume(add(1, 2) == 4)
    print('run?')
    pytest.assume(add(2, 2) == 5)
    print('done')


def test_add3():
    assert add(4, 5) == 8


if __name__ == '__main__':
    pytest.main("Workflow 1.py")
```

The failure information can be seen in pytest-html.
|
closed
|
2019-07-16T03:10:48Z
|
2019-07-17T00:32:25Z
|
https://github.com/allure-framework/allure-python/issues/403
|
[] |
shinobi01
| 1
|
numpy/numpy
|
numpy
| 28,048
|
BUG: Race adding legacy casts to custom dtype under free threading
|
### Describe the issue:
If the following code is run under Python 3.13.1t, it fails nondeterministically with `A cast was already added for <class 'numpy.dtype[rational]'> -> <class 'numpy.dtypes.Int8DType'>. (method: legacy_cast)`.
### Reproduce the code example:
```python
import concurrent.futures
import functools
import threading

import numpy as np
import numpy._core._rational_tests as _rational_tests

num_threads = 1000


def closure(b):
    b.wait()
    for _ in range(100):
        np.full((10, 10), 1, _rational_tests.rational)


with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor:
    b = threading.Barrier(num_threads)
    futures = [executor.submit(functools.partial(closure, b)) for _ in range(num_threads)]
    [f.result() for f in futures]
```
### Error message:
```shell
Traceback (most recent call last):
File "/Users/goldbaum/.pyenv/versions/3.13.1t/lib/python3.13t/site-packages/numpy/_core/numeric.py", line 353, in full
multiarray.copyto(a, fill_value, casting='unsafe')
RuntimeError: A cast was already added for <class 'numpy.dtype[rational]'> -> <class 'numpy.dtypes.Int8DType'>. (method: legacy_cast)
```
### Python and NumPy Versions:
2.3.0.dev0+git20241219.35b2c4a
3.13.1 experimental free-threading build (tags/v3.13.1:06714517797, Dec 15 2024, 15:38:01) [Clang 18.1.8 (11)]
### Runtime Environment:
[{'numpy_version': '2.3.0.dev0+git20241219.35b2c4a',
'python': '3.13.1 experimental free-threading build '
'(tags/v3.13.1:06714517797, Dec 15 2024, 15:38:01) [Clang 18.1.8 '
'(11)]',
'uname': uname_result(system='Linux', node='', release='', version='#1 SMP PREEMPT_DYNAMIC Debian 6.redacted (2024-10-16)', machine='x86_64')},
{'simd_extensions': {'baseline': ['SSE', 'SSE2', 'SSE3'],
'found': ['SSSE3',
'SSE41',
'POPCNT',
'SSE42',
'AVX',
'F16C',
'FMA3',
'AVX2'],
'not_found': ['AVX512F',
'AVX512CD',
'AVX512_KNL',
'AVX512_SKX',
'AVX512_CLX',
'AVX512_CNL',
'AVX512_ICL']}},
{'architecture': 'Zen',
'filepath': '/usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.27.so',
'internal_api': 'openblas',
'num_threads': 128,
'prefix': 'libopenblas',
'threading_layer': 'pthreads',
'user_api': 'blas',
'version': '0.3.27'}]
### Context for the issue:
Found when working on free-threading support in JAX.
|
closed
|
2024-12-20T21:17:25Z
|
2025-02-11T16:55:16Z
|
https://github.com/numpy/numpy/issues/28048
|
[
"00 - Bug",
"39 - free-threading"
] |
hawkinsp
| 17
|
labmlai/annotated_deep_learning_paper_implementations
|
deep-learning
| 113
|
ResNet: replace x with \times
|
In the paragraph about the **[Bottleneck Residual Block](https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/e75e53bb03bc3ab68ce61699c0fcf280d4cfb3d6/labml_nn/resnet/__init__.py#L158)** in the ResNet paper implementation, a few LaTeX formulas are wrong and should be corrected.
https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/e75e53bb03bc3ab68ce61699c0fcf280d4cfb3d6/labml_nn/resnet/__init__.py#L169
and
https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/e75e53bb03bc3ab68ce61699c0fcf280d4cfb3d6/labml_nn/resnet/__init__.py#L189
There are more identical mistakes in the aforementioned section.
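As a hedged illustration (not the paper's exact formulas), the fix amounts to replacing the letter `x` with `\times` in dimension notation:

```latex
% wrong: the letter "x" is typeset as an italic variable
first a $1 x 1$ convolution, then a $3 x 3$ convolution

% correct: \times produces the multiplication sign
first a $1 \times 1$ convolution, then a $3 \times 3$ convolution
```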
|
closed
|
2022-03-23T11:50:56Z
|
2022-04-10T08:12:11Z
|
https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/113
|
[] |
arxaqapi
| 1
|