| repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
FactoryBoy/factory_boy
|
django
| 136
|
Default to null for nullable OneToOneField in Django
|
I can't figure out from the docs how to achieve this.
I have a model with a OneToOne field which is nullable. If I don't specify the relation somehow, I can't create the related model instance using the `__` syntax. But if I do, it appears to create the related model no matter what.
How can I have it only create the related model if I pass args for it, and leave it null otherwise?
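For context, a minimal sketch of one pattern that seems to fit (my own illustration, not from the docs; the `myapp` models are hypothetical): default the nullable relation to `None` and opt in via a `Trait`.
```python
import factory

from myapp.models import Profile, User  # hypothetical app/models


class ProfileFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Profile


class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = User  # assumed to have a nullable OneToOneField `profile`

    profile = None  # the nullable relation stays null by default

    class Params:
        with_profile = factory.Trait(
            profile=factory.SubFactory(ProfileFactory),
        )
```
With that sketch, `UserFactory()` would leave the relation null, while `UserFactory(with_profile=True)` would create the related instance.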
|
closed
|
2014-02-28T02:46:51Z
|
2014-09-03T21:44:34Z
|
https://github.com/FactoryBoy/factory_boy/issues/136
|
[] |
funkybob
| 1
|
pandas-dev/pandas
|
pandas
| 60,305
|
API: how to check for "logical" equality of dtypes?
|
Assume you have a series, which has a certain dtype. In the case that this dtype is an instance of potentially multiple variants of a logical dtype (for example, string backed by python or backed by pyarrow), how do you check for the "logical" equality of such dtypes?
For checking the logical equality for one series, you have the option to compare it with the generic string alias (which will return True for any variant of it) or checking the dtype with `isinstance` or some `is_..._dtype` (although we have deprecated some of those). Using string dtype as the example:
```python
ser.dtype == "string"
# or
isinstance(ser.dtype, pd.StringDtype)
pd.api.types.is_string_dtype(ser.dtype)
```
When you want to check whether two Series have the same dtype, `==` will check for exact equality (in the string dtype example, the expression below can evaluate to False even if both are a StringDtype but have a different storage):
```python
ser1.dtype == ser2.dtype
```
**But then how do we check this logical equality for two dtypes?** In the example, how to know that both dtypes represent the same logical dtype (i.e. both a StringDtype instance), without necessarily wanting to check the exact type (i.e. the user doesn't necessarily know they are string dtypes, they just want to check whether they are logically the same)?
```python
# this might work?
type(ser1.dtype) == type(ser2.dtype)
```
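A small illustration of the difference (my own snippet, assuming the pyarrow-backed variant is available):
```python
import pandas as pd

ser1 = pd.Series(["a", "b"], dtype=pd.StringDtype("python"))
ser2 = pd.Series(["a", "b"], dtype=pd.StringDtype("pyarrow"))

print(ser1.dtype == ser2.dtype)              # False: exact equality, storage differs
print(type(ser1.dtype) == type(ser2.dtype))  # True: same logical dtype class
```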
Do we want some other API here that is a bit more user friendly? (just brainstorming, something like `dtype1.is_same_type(dtype2)`, or a function, ..)
---
This is important in the discussion around logical dtypes (https://github.com/pandas-dev/pandas/pull/58455), but it is already an issue for the new string dtype in pandas 3.0 as well.
cc @WillAyd @Dr-Irv @jbrockmendel (tagging some people that were most active in the logical dtypes discussion)
|
open
|
2024-11-13T18:00:22Z
|
2024-12-25T16:19:32Z
|
https://github.com/pandas-dev/pandas/issues/60305
|
[
"API Design"
] |
jorisvandenbossche
| 11
|
pytorch/pytorch
|
machine-learning
| 148,939
|
Whether the transposed tensor is contiguous affects the results of the subsequent Linear layer.
|
### 🐛 Describe the bug
I found that whether a transposed tensor is contiguous affects the results of the subsequent Linear layer. I would like to know whether this is a bug or not.
```python
import torch
from torch import nn
x = torch.randn(3, 4).transpose(0, 1)  # non-contiguous tensor (after transpose)
linear = nn.Linear(3, 2)
y1 = linear(x)               # non-contiguous input
y2 = linear(x.contiguous())  # contiguous input
print(torch.allclose(y1, y2))  # True
x = torch.randn(2, 226, 1024).transpose(0, 1)  # non-contiguous tensor (after transpose)
linear = nn.Linear(1024, 64)
y1 = linear(x)               # non-contiguous input
y2 = linear(x.contiguous())  # contiguous input
print(torch.allclose(y1, y2))  # False ???
```
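For what it's worth, a small variation of the snippet above (my own addition, not part of the original report) that prints the size of the mismatch instead of only the boolean; contiguous and non-contiguous inputs can take different matmul paths, so tiny floating-point differences are plausible here:
```python
import torch
from torch import nn

torch.manual_seed(0)
x = torch.randn(2, 226, 1024).transpose(0, 1)  # non-contiguous tensor
linear = nn.Linear(1024, 64)
y1 = linear(x)                # non-contiguous input
y2 = linear(x.contiguous())   # contiguous input

print((y1 - y2).abs().max())              # magnitude of the discrepancy
print(torch.allclose(y1, y2, atol=1e-5))  # check again with a looser absolute tolerance
```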
### Versions
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA L20
GPU 1: NVIDIA L20
Nvidia driver version: 570.86.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 44
On-line CPU(s) list: 0-43
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 22
Socket(s): 1
Stepping: 8
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1 MiB (22 instances)
L1i cache: 704 KiB (22 instances)
L2 cache: 44 MiB (22 instances)
L3 cache: 97.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-43
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] pytorch-fid==0.3.0
[pip3] torch==2.5.0+cu124
[pip3] torch-fidelity==0.3.0
[pip3] torchao==0.9.0
[pip3] torchaudio==2.5.0+cu124
[pip3] torchelastic==0.2.2
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.20.0+cu124
[pip3] triton==3.1.0
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-fid 0.3.0 pypi_0 pypi
[conda] torch 2.5.0+cu124 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchao 0.9.0 pypi_0 pypi
[conda] torchaudio 2.5.0+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchvision 0.20.0+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
|
open
|
2025-03-11T02:39:40Z
|
2025-03-18T02:47:42Z
|
https://github.com/pytorch/pytorch/issues/148939
|
[
"needs reproduction",
"module: nn",
"triaged",
"module: intel"
] |
pikerbright
| 3
|
nerfstudio-project/nerfstudio
|
computer-vision
| 2,863
|
Splatfacto new POV image quality
|
Hi,
It's not really a bug, but more a question about Splatfacto image quality/parameters. If this is not the right place, just let me know.
I am working on reconstructing a driving scene from PandaSet. Camera poses are along a car trajectory. When I render an image from this trajectory (train or eval set), the quality is very good.
Below is an example of one image from the eval set:

But then I move the target vehicle and the ego camera a bit (1 m to the left and 0.5 m up), while the camera direction stays very close to the original one:

The quality decreases very quickly; it is very sensitive to the viewing direction.
I tried to reduce the SH degree from 3 to 1 to avoid too much overfitting, with no real improvement.
Granted, a driving scene has sparser image coverage than a single-object scene with a video sequence all around it. But a few months ago I did the same with nerfacto: the quality from the initial poses was lower, but much less sensitive to new POVs. Below is a short video of a nerfacto reconstruction from 5 cameras (front, side, and front-side) on the vehicle:
https://github.com/nerfstudio-project/nerfstudio/assets/42007976/8b347e4e-325c-401e-bed8-27896f160cc1
I improved my results with nerfacto by using camera pose optimization. So I tried to do the same for Splatfacto by cherry-picking the viewmat gradient backward from the gsplat branch https://github.com/nerfstudio-project/gsplat/tree/vickie/camera-grads. But for now, it didn't improve things.
In the same way, if I try with multiple cameras (or just a side camera in place of the front one), the quality is much less impressive. Below is an example with 5 cameras and camera optimization (though it does not seem to have a big influence).
Left view:

Front view

Do you have any ideas about what I should try, firstly to improve new POV synthesis and secondly to improve multi-camera reconstruction?
And has anyone worked on camera pose optimization for Splatfacto?
And just for fun, below is an example of what we can do with objects and multiple Splatfacto instances:
https://github.com/nerfstudio-project/nerfstudio/assets/42007976/8f85ddef-bb3c-481f-b0f5-8490ae53603a
Thanks for your input; I will go on to implement a depth loss from lidar to see if it helps.
|
open
|
2024-02-01T18:17:45Z
|
2024-06-28T20:02:04Z
|
https://github.com/nerfstudio-project/nerfstudio/issues/2863
|
[] |
pierremerriaux-leddartech
| 17
|
nl8590687/ASRT_SpeechRecognition
|
tensorflow
| 109
|
Why were 32 convolution kernels chosen for the first layer?
|
closed
|
2019-04-26T08:54:29Z
|
2020-05-11T05:48:04Z
|
https://github.com/nl8590687/ASRT_SpeechRecognition/issues/109
|
[] |
myrainbowandsky
| 0
|
|
GibbsConsulting/django-plotly-dash
|
plotly
| 292
|
"find apps" in admin creates almost duplicate app in django_plotly_dash_statelessapp
|
I have created the example app SimpleExample.
When running the django project for the first time, the table `django_plotly_dash_statelessapp` is empty.
When I first run the Dash app, the table contains a new row with `name="SimpleExample"` and `slug="simpleexample"`.
However, when I run the "find apps" command in the admin panel, I get a second app with `name="simpleexample"` and `slug="simpleexample2"`. This is due to the `name = slugify(name)` found https://github.com/GibbsConsulting/django-plotly-dash/blob/master/django_plotly_dash/dash_wrapper.py#L105 and https://github.com/GibbsConsulting/django-plotly-dash/blob/master/django_plotly_dash/dash_wrapper.py#L123.
I guess this is not expected (there should be only one row in this table for the SimpleExample app).
As there is already the `slug` field for the slugified version of the app name, I suspect the two lines should be commented. If I do so, the "find apps" does not create a new row in the table.
Maybe linked to issue #263 ?
|
closed
|
2020-11-24T05:27:33Z
|
2021-02-03T05:17:08Z
|
https://github.com/GibbsConsulting/django-plotly-dash/issues/292
|
[] |
sdementen
| 1
|
plotly/dash
|
plotly
| 2,744
|
[BUG] Duplicate callback outputs
|
Hello:
I tried using the same `Input`, with some differences in the `Output`s, but Dash prompts me that I need to add `allow_duplicate=True`.

example dash:
```python
from dash import Dash, html, callback, Output, Input, no_update

app = Dash(__name__)
app.layout = html.Div([
    html.Div(children='output1', id="output1"),
    html.Div(children='output2', id="output2"),
    html.Button("button", id="button")
])

@callback(
    Output("output1", "children", allow_duplicate=True),
    Input("button", "n_clicks"),
    prevent_initial_call=True,
)
def out1(n_clicks):
    return no_update

@callback(
    Output("output1", "children", allow_duplicate=True),
    Output("output2", "children"),
    Input("button", "n_clicks"),
    prevent_initial_call=True,
)
def out2(n_clicks):
    return no_update, no_update

if __name__ == '__main__':
    app.run(debug=True)
```
If I change one of the `Output` parameters to `allow_duplicate=False`, no error is reported.
Thanks.
|
closed
|
2024-02-05T06:24:51Z
|
2024-02-29T15:01:55Z
|
https://github.com/plotly/dash/issues/2744
|
[] |
Liripo
| 4
|
jina-ai/serve
|
deep-learning
| 5,422
|
`metadata.uid` field doesn't exist in generated k8s YAMLs
|
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
In the [generated YAML file](https://github.com/jina-ai/jina/blob/master/jina/resources/k8s/template/deployment-executor.yml), there is a reference to `metadata.uid` (`fieldPath: metadata.uid`), but it looks like there is no `uid` in `metadata`:
```
metadata:
  name: {name}
  namespace: {namespace}
```
This sometimes causes problems for our Operator use case; we had to manually remove the `POD_UID` block generated by `jina`. Are we going to address this?
**Describe how you solve it**
<!-- copy past your code/pull request link -->
---
<!-- Optional, but really help us locate the problem faster -->
**Environment**
<!-- Run `jina --version-full` and copy paste the output here -->
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
|
closed
|
2022-11-22T02:22:01Z
|
2022-12-01T08:33:06Z
|
https://github.com/jina-ai/serve/issues/5422
|
[] |
zac-li
| 0
|
BeanieODM/beanie
|
asyncio
| 246
|
_saved_state and _previous_revision_id saved in DB
|
These fields should not be stored in the database

|
closed
|
2022-04-25T12:41:42Z
|
2023-05-23T18:41:53Z
|
https://github.com/BeanieODM/beanie/issues/246
|
[
"snippet requested"
] |
nichitag
| 3
|
PaddlePaddle/ERNIE
|
nlp
| 76
|
Error when adding MSRA labels
|

Classification error. Where can I change the number of classes?
|
closed
|
2019-04-04T07:24:54Z
|
2019-04-17T16:16:49Z
|
https://github.com/PaddlePaddle/ERNIE/issues/76
|
[] |
jtyoui
| 1
|
sinaptik-ai/pandas-ai
|
data-visualization
| 1,455
|
failed to solve: process "/bin/sh -c npm run build" did not complete successfully: exit code: 1
|
------
> [client 6/6] RUN npm run build:
0.585
0.585 > client@0.1.0 build
0.585 > next build
0.585
1.266 Attention: Next.js now collects completely anonymous telemetry regarding usage.
1.266 This information is used to shape Next.js' roadmap and prioritize features.
1.267 You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
1.267 https://nextjs.org/telemetry
1.267
1.336 ▲ Next.js 14.2.3
1.336 - Environments: .env
1.336
1.406 Creating an optimized production build ...
41.70 ✓ Compiled successfully
41.70 Skipping linting
41.70 Checking validity of types ...
46.59 Failed to compile.
46.59
46.59 ./app/(ee)/settings/layout.tsx:9:11
46.59 Type error: 'SettingsLayout' cannot be used as a JSX component.
46.59 Its type '({ children, }: { children: ReactNode; }) => Promise<Element>' is not a valid JSX element type.
46.59 Type '({ children, }: { children: ReactNode; }) => Promise<Element>' is not assignable to type '(props: any, deprecatedLegacyContext?: any) => ReactNode'.
46.59 Type 'Promise<Element>' is not assignable to type 'ReactNode'.
46.59
46.59 7 | children: React.ReactNode;
46.59 8 | }) {
46.59 > 9 | return <SettingsLayout>{children}</SettingsLayout>;
46.59 | ^
46.59 10 | }
46.59 11 |
46.59 12 |
------
failed to solve: process "/bin/sh -c npm run build" did not complete successfully: exit code: 1
|
closed
|
2024-12-07T08:09:21Z
|
2024-12-16T11:48:47Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1455
|
[] |
shivbhor
| 2
|
recommenders-team/recommenders
|
machine-learning
| 1,614
|
[BUG] Error on als_movielens.ipynb
|
### Description
<!--- Describe your issue/bug/request in detail -->
Hello :)
I found **two** bugs on [ALS Tutorial](https://github.com/microsoft/recommenders/blob/main/examples/00_quick_start/als_movielens.ipynb).
### 1. Error at `data = movielens.load_spark_df(spark, size=MOVIELENS_DATA_SIZE, schema=schema)`
This occurs when I use pyspark==2.3.1.
I used pyspark==2.3.1 because [tutorial](https://github.com/microsoft/recommenders/blob/main/examples/00_quick_start/als_movielens.ipynb) was based on this version,
but it fails.
But when I use pyspark==3.2.0, it works.

### 2. Join operation fail at
```
dfs_pred_exclude_train = dfs_pred.alias("pred").join(
    train.alias("train"),
    (dfs_pred[COL_USER] == train[COL_USER]) & (dfs_pred[COL_ITEM] == train[COL_ITEM]),
    how='outer'
)
```
As version 2.3.1 fails, I tried this with pyspark==3.2.0.
However, `AnalysisException` occurs.

I think this is also a version issue.
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
- Google Colab
- System version: 3.7.12 (GCC 7.5.0)
- Spark version: 2.3.1, 3.2.0
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
Run https://github.com/microsoft/recommenders/blob/main/examples/00_quick_start/als_movielens.ipynb on Colab
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
Version issues should be resolved.
Join operations should be done without error.
### Other Comments
Thank you in advance! :)
|
closed
|
2022-01-19T13:36:18Z
|
2022-01-20T02:44:10Z
|
https://github.com/recommenders-team/recommenders/issues/1614
|
[
"bug"
] |
Seyoung9304
| 2
|
plotly/dash
|
data-visualization
| 2,916
|
[BUG] TypeError: Cannot read properties of undefined (reading 'concat') in getInputHistoryState
|
**Context**
The following code is a simplified example of the real application. The layout initially contains a root div and inputs. By using **pattern matching** and **set_props**, the callback function can be set up in a general manner. When the callback is triggered by a button, the root div is populated with other content such as another div and more inputs. The newly created inputs then trigger the same callback and the layout is further updated: a nested structure.
The ID is like this: `{'type': 'ELEMENT_TYPE', 'eid': 'ELEMENT_ID', 'target': 'TARGET_ID'}`
- type: The element type (Div, Button, Input, ...)
- eid: Unique element id
- target: In case the element is an input, then this is the id of the target element (optional)
The example below works like this:
1. Button1 is clicked
2. Callback update is triggered
3. key target is checked
4. run set_props on the element with eid = target and the desired content
***failing example:***

This works for Button1 as expected. But when clicking Button2 (which was created by Button1) the mentioned TypeError occurs. When inspecting the console, it can be seen that the itempath is undefined.
```
from dash import Dash, html, dcc, ALL, callback_context, exceptions, set_props, callback
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output, State

# app
app = Dash(__name__)

# layout
app.layout = html.Div([
    html.H4(['Title'], id={'type': 'H4', 'eid': 'title', 'target': ''})
    ,dbc.Button(['BTN1 (create new div)'], id={'type': 'Button', 'eid': 'btn1', 'target': 'content1'})
    ,html.Div([], id={'type': 'Div', 'eid': 'content1', 'target': ''})
    ,html.Div([], id={'type': 'Div', 'eid': 'content3', 'target': ''})
], id={'type': 'Div', 'eid': 'body', 'target': ''})

# callback
@callback(
    Output({'type': 'H4', 'eid': 'title', 'target': ''}, 'children')
    ,State({'type': 'Div', 'eid': 'body', 'target': ''}, 'children')
    ,Input({'type': 'Button', 'eid': ALL, 'target': ALL}, 'n_clicks')
    ,Input({'type': 'Input', 'eid': ALL, 'target': ALL}, 'value')
    ,prevent_initial_call=True
)
def update(
    H4__children
    ,Button__n_clicks
    ,Input__value
):
    if callback_context.triggered_id is not None:
        id_changed = callback_context.triggered_id
    else:
        raise exceptions.PreventUpdate
    if id_changed['target'] == 'content1':
        set_props(
            {'type': 'Div', 'eid': id_changed.target, 'target': ''}
            ,{'children': [
                dbc.Button(['BTN2 (write OUTPUT to new div)'], id={'type': 'Button', 'eid': 'btn2', 'target': 'content2'})
                #dbc.Button(['BTN2 (write OUTPUT to new div)'], id={'type': 'Button', 'eid': 'btn2', 'target': 'content3'})
                ,html.Div(['div content2 from button1'], id={'type': 'Div', 'eid': 'content2', 'target': ''})
            ]}
        )
    elif id_changed['target'] == 'content2':
        set_props({'type': 'Div', 'eid': id_changed.target, 'target': ''}, {'children': 'OUTPUT for content2 from button2'})
    elif id_changed['target'] == 'content3':
        set_props({'type': 'Div', 'eid': id_changed.target, 'target': ''}, {'children': 'OUTPUT for content3 from button2'})
    return 'Title'

# main
if __name__ == '__main__':
    app.run(debug=True, dev_tools_hot_reload=True, dev_tools_ui=True)
```
Changing the target of Button1 to content3 works (just swap in the commented `dbc.Button` line within the callback). So the itempath for Button2 is found and the callback for the new input works.
***working example:***

`pip list | grep dash`
```
dash 2.17.1
dash_ag_grid 31.2.0
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- OS: Win11
- Browser chrome
- Version 126
**Describe the bug**
The itempath for content2 Div is undefined and concat in getInputHistoryState fails. The itempath is undefined because the component is not present within the store. So it looks like set_props does not update the store?
```
reducer.js:57 Uncaught (in promise)
TypeError: Cannot read properties of undefined (reading 'concat')
at getInputHistoryState (reducer.js:57:36)
at reducer.js:82:34
at reducer.js:121:16
at dispatch (redux.js:288:1)
at index.js:20:1
at dispatch (redux.js:691:1)
at callbacks.ts:214:9
at index.js:16:1
at dispatch (redux.js:691:1)
at callbacks.ts:255:13
```
**Expected behavior**
The message 'OUTPUT for content2 from button2' should appear in Div with id {'type': 'Div','eid': 'content2', 'target': ''}.
|
closed
|
2024-07-08T17:25:20Z
|
2024-07-26T13:16:01Z
|
https://github.com/plotly/dash/issues/2916
|
[] |
wKollendorf
| 2
|
robotframework/robotframework
|
automation
| 5,146
|
Warning with page screen shot
|
I am seeing the warning below in the console and in the HTML report while executing a Robot script.
Is there any way to hide or disable this warning?
[ WARN ] Keyword 'Capture Page Screenshot' could not be run on failure: WebDriverException: Message: [Exception... "Data conversion failed because significant data would be lost" nsresult: "0x80460003 (NS_ERROR_LOSS_OF_SIGNIFICANT_DATA)" location: "<unknown>" data: no]
I use robotframework 6.1.1 with seleniumLibrary 5.1.3
|
closed
|
2024-06-09T11:33:15Z
|
2024-06-11T16:35:16Z
|
https://github.com/robotframework/robotframework/issues/5146
|
[] |
aya-spec
| 1
|
Asabeneh/30-Days-Of-Python
|
matplotlib
| 565
|
Duplicated exercises in Day 4
|
There are some duplicated exercises in Day 4 ([30-Days-Of-Python](https://github.com/Asabeneh/30-Days-Of-Python/tree/master)/[04_Day_Strings](https://github.com/Asabeneh/30-Days-Of-Python/tree/master/04_Day_Strings)
/04_strings.md).
1. I believe exercise 23 and exercise 26 are nearly the same.
> 23. Use index or find to **find the position of the first occurrence of the word 'because' in the following sentence: 'You cannot end a sentence with because because because is a conjunction'**
> 26. **Find the position of the first occurrence of the word 'because' in the following sentence: 'You cannot end a sentence with because because because is a conjunction'**
2. Exercise 25 and 27 are the same.
|
open
|
2024-07-23T08:25:04Z
|
2024-07-24T07:00:01Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/565
|
[] |
chienchuanw
| 1
|
s3rius/FastAPI-template
|
fastapi
| 69
|
Add psycopg support.
|
It would be super nice to have the ability to generate a project without an ORM,
since for high-load scenarios it's really useful.
|
closed
|
2022-04-12T22:16:11Z
|
2022-04-15T20:41:15Z
|
https://github.com/s3rius/FastAPI-template/issues/69
|
[] |
s3rius
| 1
|
huggingface/datasets
|
deep-learning
| 6,912
|
Add MedImg for streaming
|
### Feature request
Host the MedImg dataset (similar to Imagenet but for biomedical images).
### Motivation
There is a clear need for biomedical image foundation models and large scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community.
### Your contribution
MedImg can be found [here](https://www.cuilab.cn/medimg/#).
|
open
|
2024-05-22T00:55:30Z
|
2024-09-05T16:53:54Z
|
https://github.com/huggingface/datasets/issues/6912
|
[
"dataset request"
] |
lhallee
| 8
|
FactoryBoy/factory_boy
|
sqlalchemy
| 400
|
'PostGenerationContext' object has no attribute 'items'
|
In https://github.com/FactoryBoy/factory_boy/commit/8dadbe20e845ae7e311edf2cefc4ce9e24c25370
PostGenerationContext was changed to a NamedTuple. This causes a crash in utils.log_pprint:110
AttributeError: 'PostGenerationContext' object has no attribute 'items'
PR with a quick fix here: https://github.com/FactoryBoy/factory_boy/pull/399
|
closed
|
2017-07-31T18:31:48Z
|
2017-07-31T18:34:36Z
|
https://github.com/FactoryBoy/factory_boy/issues/400
|
[] |
Fingel
| 1
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,185
|
Not working as WebSocket on Windows 7 Ultimate
|
**Your question**
I'm trying to start a WebSocket server with Flask and Flask-SocketIO. It works in AJAX polling mode (not WebSocket mode), but it stopped working when I installed the eventlet module (for WebSocket support).
**Logs**
Server initialized for eventlet.
Then nothing :(
|
closed
|
2020-02-11T07:13:40Z
|
2020-02-11T08:35:40Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1185
|
[
"question"
] |
fred913
| 2
|
streamlit/streamlit
|
data-visualization
| 10,026
|
Suppress multiselect "Remove an option first" message. It displays immediately upon reaching max selection.
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
st.multiselect displays this message immediately upon reaching n = max_selections:

For example, if max_selections = 2, the user will make 1 selection and then a 2nd selection; this message pops up immediately upon making the 2nd selection.
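A minimal repro sketch (my own, not from the original report) of the behavior described above:
```python
import streamlit as st

# with max_selections=2, the "Remove an option first" notice appears
# as soon as the second option is picked
choice = st.multiselect(
    "Pick up to two options",
    options=["a", "b", "c", "d"],
    max_selections=2,
)
st.write(choice)
```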
### Why?
The current state of the function makes the user feel as though their 2nd selection is raising an error.
### How?
The message should only appear if the user is attempting to make n > max_selections.
### Additional Context
_No response_
|
open
|
2024-12-16T04:00:23Z
|
2024-12-17T03:42:44Z
|
https://github.com/streamlit/streamlit/issues/10026
|
[
"type:enhancement",
"feature:st.multiselect"
] |
LarryLoveIV
| 3
|
aleju/imgaug
|
deep-learning
| 760
|
Normalizing points for polygon intersection testing.
|
According to https://github.com/ideasman42/isect_segments-bentley_ottmann/issues/4 , their implementation is best tested on points normalized between -1 and 1. I ran into an issue similar to the issue I linked, running imgaug:
```
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/imgaug/external/poly_point_isect_py2py3.py", line 351, in remove
self._events_current_sweep.remove(event)
File "/usr/lib/python3.9/site-packages/imgaug/external/poly_point_isect_py2py3.py", line 1302, in remove
raise KeyError(str(key))
KeyError: 'Event(0x7f4978101400, s0=(780.1091, 495.0222), s1=(781.7717, 493.7595), p=(780.1091, 495.0222), type=2, slope=-0.7594731143991321)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/imgaug/augmentables/polys.py", line 2465, in _generate_intersection_points
intersections = isect_segments_include_segments(segments)
File "/usr/lib/python3.9/site-packages/imgaug/external/poly_point_isect_py2py3.py", line 621, in isect_segments_include_segments
return isect_segments_impl(segments, include_segments=True)
File "/usr/lib/python3.9/site-packages/imgaug/external/poly_point_isect_py2py3.py", line 590, in isect_segments_impl
sweep_line.handle(p, events_current)
File "/usr/lib/python3.9/site-packages/imgaug/external/poly_point_isect_py2py3.py", line 398, in handle
self.handle_event(e)
File "/usr/lib/python3.9/site-packages/imgaug/external/poly_point_isect_py2py3.py", line 438, in handle_event
if self.remove(e):
File "/usr/lib/python3.9/site-packages/imgaug/external/poly_point_isect_py2py3.py", line 360, in remove
assert(event.in_sweep == False)
AssertionError
```
Diving deeper into the issue I found a polygon with two points that were nearly identical (`722.90222168` vs `722.71356201` and `381.67370605` vs `380.05108643`). Removing one of these points prevented the code from crashing, but maybe it is better to normalize the points to the range `[-1, 1]` before using the intersection checking algorithm. I haven't tested whether this also solves my problem.
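A rough sketch of the normalization idea (my own, untested, not imgaug's implementation): rescale the polygon points into `[-1, 1]` before handing the segments to the intersection test.
```python
import numpy as np


def normalize_points(points):
    """Rescale an (N, 2) array of points into the range [-1, 1] per axis."""
    points = np.asarray(points, dtype=np.float64)
    mins = points.min(axis=0)
    span = np.maximum(points.max(axis=0) - mins, 1e-12)  # avoid division by zero
    return (points - mins) / span * 2.0 - 1.0
```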
Additional motivation: I assume this will also fix [this](https://github.com/aleju/imgaug/blob/master/imgaug/external/poly_point_isect_py2py3.py#L53-L55) issue.
|
open
|
2021-04-14T14:55:33Z
|
2021-04-14T14:55:33Z
|
https://github.com/aleju/imgaug/issues/760
|
[] |
hgaiser
| 0
|
allenai/allennlp
|
pytorch
| 5,171
|
No module named 'allennlp.data.tokenizers.word_splitter'
|
I'm using Python 3.7 in Google Colab. I installed allennlp==2.4.0 and allennlp-models.
When I run my code:
from allennlp.data.tokenizers.word_splitter import SpacyWordSplitter
I get this error:
ModuleNotFoundError: No module named 'allennlp.data.tokenizers.word_splitter'
help me please.
|
closed
|
2021-04-30T17:11:44Z
|
2021-05-17T16:10:36Z
|
https://github.com/allenai/allennlp/issues/5171
|
[
"question",
"stale"
] |
mitra8814
| 2
|
encode/httpx
|
asyncio
| 2,660
|
The `get_environment_proxies` function in _utils.py does not support IPv4, IPv6 correctly
|
Hi, I encountered an error when my environment's `no_proxy` includes an IPv6 address like `::1`. It is wrongly transformed into `all://*::1`, which causes a urlparse error since _urlparse.py parses the `:1` as a port.

The `get_environment_proxies` function in **_utils.py** is responsible for parsing and mounting proxy info from system environment.
https://github.com/encode/httpx/blob/4b5a92e88e03443c2619f0905d756b159f9f0222/httpx/_utils.py#L229-L264
For the env `no_proxy`, according to [CURLOPT_NOPROXY explained](https://curl.se/libcurl/c/CURLOPT_NOPROXY.html), it should support domains, IPv4, IPv6 and `localhost`. However, the current `get_environment_proxies` implementation only supports domains correctly, as it always adds a wildcard `*` in front of the `hostname`.
To fix this issue, I looked into this repo and suggest handling the `no_proxy` hostnames as domains, IPv4, IPv6 and `localhost` separately. I have updated the `get_environment_proxies` function in **_utils.py** and tested it.
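Just to illustrate the kind of classification I have in mind (a hedged sketch of my own, not the code in the PR):
```python
import ipaddress


def classify_no_proxy_entry(hostname: str) -> str:
    """Decide whether a NO_PROXY entry is 'localhost', an IP literal, or a domain."""
    if hostname == "localhost":
        return "localhost"
    try:
        ipaddress.ip_address(hostname)
    except ValueError:
        return "domain"  # only domains should get the leading "*" wildcard
    return "ip"
```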
Refer to the PR: #2659
Replies and discussions are welcomed!
|
closed
|
2023-04-14T09:44:37Z
|
2023-04-19T12:14:39Z
|
https://github.com/encode/httpx/issues/2660
|
[] |
HearyShen
| 0
|
ivy-llc/ivy
|
numpy
| 27,862
|
Fix Frontend Failing Test: paddle - tensor.torch.Tensor.__mul__
|
closed
|
2024-01-07T22:57:02Z
|
2024-01-07T23:36:03Z
|
https://github.com/ivy-llc/ivy/issues/27862
|
[
"Sub Task"
] |
NripeshN
| 0
|
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
pytorch
| 914
|
CPU training time
|
It is taking a lot of time to train on the CPU. Is it possible to reduce CPU training time? How long does it take to train using the CPU?
|
closed
|
2020-02-07T04:55:41Z
|
2020-02-08T12:33:48Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/914
|
[] |
manvirvirk
| 3
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 852
|
mat1 and mat2 shapes cannot be multiplied (1440x512 and 320x128) TRAINING SYNTHETIZER
|
Can someone help me with this issue?
Traceback (most recent call last):
File "D:\PycharmProjects\Realtime\synthesizer_train.py", line 35, in <module>
train(**vars(args))
File "D:\PycharmProjects\Realtime\synthesizer\train.py", line 178, in train
m1_hat, m2_hat, attention, stop_pred = model(texts, mels, embeds)
File "D:\PycharmProjects\Realtime\venv\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "D:\PycharmProjects\Realtime\synthesizer\models\tacotron.py", line 387, in forward
encoder_seq_proj = self.encoder_proj(encoder_seq)
File "D:\PycharmProjects\Realtime\venv\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "D:\PycharmProjects\Realtime\venv\lib\site-packages\torch\nn\modules\linear.py", line 96, in forward
return F.linear(input, self.weight, self.bias)
File "D:\PycharmProjects\Realtime\venv\lib\site-packages\torch\nn\functional.py", line 1847, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1440x512 and 320x128)
|
closed
|
2021-09-25T14:39:20Z
|
2021-09-27T18:22:49Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/852
|
[] |
ireneb612
| 3
|
ShishirPatil/gorilla
|
api
| 88
|
[bug] Hosted Gorilla: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8a in position 9: invalid start byte
|
Hello,
I get this error when launching Gorilla on a Windows 10 PC:
`UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8a in position 9: invalid start byte`
It seems like the output works though.
Thank you.
|
open
|
2023-08-07T16:46:52Z
|
2023-08-08T09:43:49Z
|
https://github.com/ShishirPatil/gorilla/issues/88
|
[
"hosted-gorilla"
] |
giuliastro
| 4
|
tqdm/tqdm
|
pandas
| 1,432
|
Multi processing with args
|
- [x] I have marked all applicable categories:
  + [x] exception-raising bug
  + [x] visual output bug
- [x] I have visited the [source website], and in particular read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
4.64.1 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] linux
```
**Multi process**
So I have a task where I use tqdm to show a progress bar; it works fine but it takes a long time:
```python
for item in tqdm(self.items,
desc=f"Import progression for {self.__filename} {self.name}",
unit=" Item",
colour="green"):
self.db.insert_into(self.__filename, self.name, item)
```
So I want to multiprocess it (using only 1 pbar for all the work, i.e. global progression, not per process).
I took a look at process_map().
Example from Stack Overflow:
```python
from tqdm.contrib.concurrent import process_map # or thread_map
import time
def _foo(my_number):
    square = my_number * my_number
    time.sleep(1)
    return square

if __name__ == '__main__':
    r = process_map(_foo, range(0, 30), max_workers=2)
```
I'm not sure how to apply this solution to my situation, since my function takes more arguments than just the iterable item.
If someone can help me put that in place it would be appreciated.
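A minimal sketch of one common pattern (my suggestion, names below are illustrative): bind the extra, non-iterable arguments with `functools.partial` so `process_map` only has to iterate over the items.
```python
import time
from functools import partial

from tqdm.contrib.concurrent import process_map


def insert_item(item, filename, table_name):
    # stand-in for self.db.insert_into(filename, table_name, item)
    time.sleep(0.01)
    return (filename, table_name, item)


if __name__ == "__main__":
    worker = partial(insert_item, filename="data.csv", table_name="items")
    results = process_map(worker, range(100), max_workers=2,
                          desc="Import progression", unit=" Item")
```
Note that whatever is bound this way has to be picklable, so it is usually easier to pass plain values and open the DB connection inside the worker.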
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
|
open
|
2023-02-20T14:21:24Z
|
2023-03-07T18:02:40Z
|
https://github.com/tqdm/tqdm/issues/1432
|
[] |
arist0v
| 2
|
eriklindernoren/ML-From-Scratch
|
deep-learning
| 108
|
Naive Bayes
|
Shouldn't the coefficient be
coeff = 1.0 / math.pi * math.sqrt(2.0 * math.pi) + eps
In the equation of the normal distribution, the pi is outside of the sqrt.
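For reference (my addition, not from the repo), the usual form of the normal density, written with the variance inside the square root or with $\sigma$ factored out:

$$
f(x) \;=\; \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
\;=\; \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
$$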
|
open
|
2024-03-24T10:33:59Z
|
2024-05-12T20:33:13Z
|
https://github.com/eriklindernoren/ML-From-Scratch/issues/108
|
[] |
StevenSopilidis
| 2
|
miguelgrinberg/flasky
|
flask
| 106
|
How to apply login requirement in posts, users and comments api.
|
As you have mentioned in the book, **@auth.login_required** can be used to protect any resource. However, the GitHub repo does not show it being applied to the resources in the posts, comments, and user APIs in api_v1_0.
When I try `from .authentication import auth` and use @auth.login_required, it does not work. Could you please help me resolve the issue?
|
closed
|
2016-01-16T15:20:36Z
|
2016-01-16T15:51:11Z
|
https://github.com/miguelgrinberg/flasky/issues/106
|
[] |
ibbad
| 1
|
cvat-ai/cvat
|
tensorflow
| 8,886
|
"docker-compose up" got error...
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. git clone http://192.168.1.122:5000/polonii/cvat
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
_No response_
|
closed
|
2024-12-28T20:54:39Z
|
2024-12-28T20:55:02Z
|
https://github.com/cvat-ai/cvat/issues/8886
|
[
"bug"
] |
PospelovDaniil
| 0
|
rasbt/watermark
|
jupyter
| 15
|
Wrong package versions within virtualenv
|
I'm not sure how widespread this issue is, or if it's something particular to my setup, but when I use watermark to report package information (using -p) within a jupyter notebook that is running in a virtualenv, version information about system level packages are reported rather than packages installed within my environment. If a package is installed in my virtualenv and not installed at the system level, then I end up getting `DistributionNotFound` exceptions when I try to report the version number using watermark.
The problem stems from the call to `pkg_resources.get_distribution`. If I use `get_distribution` directly from within my notebook to report the version of e.g. `numpy` it shows my system level numpy info (v 1.11.0) rather than info about numpy installed within my virtualenv (v 1.11.1). For example:
> `pkg_resources.get_distribution('numpy')`
> `numpy 1.11.0 (/usr/local/lib/python2.7/site-packages)`
Similarly
> `%watermark -p numpy`
> `numpy 1.11.0`
But when I check the version of what is imported:
> `import numpy as np`
> `np.__version__`
> `'1.11.1'`
If I use `pkg_resources` from a python console or script running within a virtualenv, it reports the proper information about packages. But if I use it from an ipython console or jupyter notebook it reports system level information.
If this is a widespread issue, should `watermark` find another more robust solution to getting package information other than using `pkg_resources`?
|
closed
|
2016-08-03T17:35:17Z
|
2016-08-16T22:54:58Z
|
https://github.com/rasbt/watermark/issues/15
|
[
"bug",
"help wanted"
] |
mrbell
| 17
|
Gozargah/Marzban
|
api
| 1,544
|
Can't get config via cli
|
I'm trying to get myself a config to paste it into FoxRay, and getting the following error:

|
open
|
2024-12-28T18:54:50Z
|
2024-12-29T17:31:17Z
|
https://github.com/Gozargah/Marzban/issues/1544
|
[] |
subzero911
| 3
|
aleju/imgaug
|
machine-learning
| 178
|
Does this code use the gpu ?
|
Hi. I just found out that opencv-python doesn't actually use the gpu, even when properly compiled with cuda support on.
I'm currently actively looking for a package that does. Does this repo expose code that makes augmentations run on the GPU?
|
open
|
2018-09-10T15:10:03Z
|
2018-10-30T14:50:22Z
|
https://github.com/aleju/imgaug/issues/178
|
[] |
dmenig
| 2
|
ultralytics/ultralytics
|
computer-vision
| 19,547
|
onnx-inference-for-segment-predict
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello, when I use the https://github.com/ultralytics/ultralytics/tree/main/examples/YOLOv8-Segmentation-ONNXRuntime-Python demo for my paper-segmentation task, I found that rotated instances may be unsupported. When I set the param rotated = False, it looks like this  but if I set it to True, it becomes
 What confuses me is that if I use the .pt model in Python to predict, the result is
 correct, so could anyone help me resolve this question?
### Additional
_No response_
|
closed
|
2025-03-06T08:16:34Z
|
2025-03-15T03:40:42Z
|
https://github.com/ultralytics/ultralytics/issues/19547
|
[
"question",
"fixed",
"segment",
"exports"
] |
Keven-Don
| 29
|
sammchardy/python-binance
|
api
| 1,002
|
ModuleNotFoundError: No module named 'binance.websockets'
|
Whenever I try to run the pumpbot from the terminal I get this error.
Does anyone know how I can fix it?

|
open
|
2021-08-29T21:12:03Z
|
2022-08-05T17:28:29Z
|
https://github.com/sammchardy/python-binance/issues/1002
|
[] |
safwaanfazz
| 1
|
TheKevJames/coveralls-python
|
pytest
| 55
|
LICENSE not included in source package
|
When running `./setup.py sdist`, the LICENSE file is not included. I opted to add it to MANIFEST.in.
|
closed
|
2015-02-07T09:10:31Z
|
2015-02-19T06:09:56Z
|
https://github.com/TheKevJames/coveralls-python/issues/55
|
[] |
joachimmetz
| 5
|
WZMIAOMIAO/deep-learning-for-image-processing
|
deep-learning
| 185
|
What is this problem?
|


|
closed
|
2021-03-18T08:38:23Z
|
2021-03-20T07:09:50Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/185
|
[] |
HuKai97
| 2
|
wagtail/wagtail
|
django
| 12,062
|
StructBlock missing a joining space when displaying multiple error messages
|
### Issue Summary
When multiple errors are raised for a `StructBlock`, the error messages get joined with no separating space.
For example, given these error messages: `["one.", "two."]`, this would be displayed as follows:
* For a model field: `one. two.` (good 👍🏻)
* For a StructBlock: `one.two.` (not good 👎🏻)
### Steps to Reproduce
Raise multiple `ValidationError`s for a `StructBlock`.
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: Not relevant for this issue.
- Django version: Not relevant for this issue.
- Wagtail version: 6.1.2
- Browser version: Not relevant for this issue.
|
closed
|
2024-06-18T14:56:13Z
|
2024-06-20T18:30:59Z
|
https://github.com/wagtail/wagtail/issues/12062
|
[
"type:Bug",
"status:Unconfirmed"
] |
kbayliss
| 0
|
scikit-learn/scikit-learn
|
machine-learning
| 30,525
|
OPTICS.fit leaks memory when called under VS Code's built-in debugger
|
### Describe the bug
Running the clustering algorithm with the n_jobs parameter set to more than 1 thread causes a memory leak each time the algorithm is run.
The simple code below leaks additional memory at each loop cycle. The issue does not occur if I replace the manifold reduction algorithm with precomputed features.
### Steps/Code to Reproduce
```python
import gc
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import OPTICS
import psutil

process = psutil.Process()

def main():
    data = np.random.random((100, 100))
    for _i in range(1, 50):
        points = TSNE().fit_transform(data)
        prediction = OPTICS(n_jobs=2).fit_predict(points)  # n_jobs!=1
        points = None
        prediction = None
        del prediction
        del points
        gc.collect()
        print(f"{process.memory_info().rss / 1e6:.1f} MB")

main()
```
### Expected Results
Program's memory usage nearly constant between loop cycles
### Actual Results
Program's memory usage increases infinitely
### Versions
```shell
System:
python: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
executable: .venv\Scripts\python.exe
machine: Windows-10-10.0.26100-SP0
Python dependencies:
sklearn: 1.6.0
pip: 24.3.1
setuptools: 63.2.0
numpy: 1.25.2
scipy: 1.14.1
Cython: None
pandas: 2.2.3
matplotlib: 3.10.0
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: openmp
internal_api: openmp
num_threads: 16
prefix: vcomp
filepath: .venv\Lib\site-packages\sklearn\.libs\vcomp140.dll
version: None
user_api: blas
internal_api: openblas
num_threads: 16
prefix: libopenblas
filepath: .venv\Lib\site-packages\numpy\.libs\libopenblas64__v0.3.23-246-g3d31191b-gcc_10_3_0.dll
version: 0.3.23.dev
threading_layer: pthreads
architecture: Cooperlake
user_api: blas
internal_api: openblas
num_threads: 16
prefix: libscipy_openblas
filepath: .venv\Lib\site-packages\scipy.libs\libscipy_openblas-5b1ec8b915dfb81d11cebc0788069d2d.dll
version: 0.3.27.dev
threading_layer: pthreads
architecture: Cooperlake
```
|
open
|
2024-12-21T15:50:53Z
|
2024-12-31T14:12:54Z
|
https://github.com/scikit-learn/scikit-learn/issues/30525
|
[
"Bug",
"Performance",
"Needs Investigation"
] |
Probelp
| 18
|
flairNLP/flair
|
pytorch
| 3,036
|
Support for Vietnamese
|
Hi, I am looking through Flair and wondering if it support Vietnamese or not. If not, will it in the future? Thank you!
_Originally posted by @longsc2603 in https://github.com/flairNLP/flair/issues/2#issuecomment-1354413764_
|
closed
|
2022-12-21T01:37:57Z
|
2023-06-11T11:25:47Z
|
https://github.com/flairNLP/flair/issues/3036
|
[
"wontfix"
] |
longsc2603
| 1
|
Zeyi-Lin/HivisionIDPhotos
|
machine-learning
| 211
|
onnxruntime error
|
Hi, I am using slurm and I get the following issue:
```
[E:onnxruntime:Default, env.cc:234 ThreadMain] pthread_setaffinity_np failed for thread: 3131773, index: 22, mask: {23, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.#[m
#[1;31m2024-11-22 13:59:58.924148953
```
What can I do?
Thanks in advance.
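For reference, a hedged sketch of what the error message appears to ask for (my own, untested in this setup; the model path and thread counts are placeholders): set the thread counts explicitly via `SessionOptions` so onnxruntime does not try to pin thread affinity itself.
```python
import onnxruntime as ort

sess_options = ort.SessionOptions()
sess_options.intra_op_num_threads = 4   # pick something <= the CPUs allocated by slurm
sess_options.inter_op_num_threads = 1
session = ort.InferenceSession("model.onnx", sess_options=sess_options)  # placeholder model path
```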
|
open
|
2024-11-22T13:36:11Z
|
2024-11-22T13:36:11Z
|
https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/211
|
[] |
gebaltso
| 0
|
babysor/MockingBird
|
deep-learning
| 616
|
MacBook reports an error when running python demo_toolbox.py -d .\samples
|
I actually do have PyQt5 installed; online it says PyQt@5 is the same thing as PyQt5.
<img width="491" alt="图片" src="https://user-images.githubusercontent.com/22427032/173240261-25972477-abf0-4017-a8bc-f40b8c7b423e.png">
<img width="491" alt="图片" src="https://user-images.githubusercontent.com/22427032/173240295-61d2ec48-a9a2-4fca-a336-3acf2b58df98.png">
|
open
|
2022-06-12T15:24:45Z
|
2022-07-16T10:56:35Z
|
https://github.com/babysor/MockingBird/issues/616
|
[] |
iOS-Kel
| 3
|
hootnot/oanda-api-v20
|
rest-api
| 45
|
Version 0.2.1
|
- [x] fix missing requirement
- [x] fix examples: candle-data.py
|
closed
|
2016-11-15T18:44:19Z
|
2016-11-15T19:03:01Z
|
https://github.com/hootnot/oanda-api-v20/issues/45
|
[
"Release"
] |
hootnot
| 0
|
polakowo/vectorbt
|
data-visualization
| 478
|
Unable to move stop-loss in profit
|
Hi!
I'm using `Portfolio.from_signals` and trying to set up a moving stop-loss using supertrend. So for a long position I want my stop-loss to move up with the supertrend. I use `adjust_sl_func_nb` for that and everything works great until the stop-loss moves into profit. There's a condition in `get_stop_price_nb` which does not allow having a stop-loss in profit, for some reason.
Just for the sake of experiment, I commented it out and the strategy worked like a charm. So why is that check there? And how can I move my SL into profit with `adjust_sl_func_nb`?
Thanks!
|
open
|
2022-07-31T22:00:57Z
|
2022-10-01T08:24:33Z
|
https://github.com/polakowo/vectorbt/issues/478
|
[] |
tossha
| 1
|
reloadware/reloadium
|
django
| 183
|
pydevd_process_net_command.py frequently fails with UnicodeDecodeError
|
    def _on_run(self):
        read_buffer = ""
        try:
            while not self.killReceived:
                try:
                    r = self.sock.recv(1024)
                except:
                    if not self.killReceived:
                        traceback.print_exc()
                        self.handle_except()
                    return #Finished communication.

                #Note: the java backend is always expected to pass utf-8 encoded strings. We now work with unicode
                #internally and thus, we may need to convert to the actual encoding where needed (i.e.: filenames
                #on python 2 may need to be converted to the filesystem encoding).
                if hasattr(r, 'decode'):
                    r = r.decode('utf-8')

                read_buffer += r
                if DebugInfoHolder.DEBUG_RECORD_SOCKET_READS:
                    sys.stderr.write(u'debugger: received >>%s<<\n' % (read_buffer,))
                    sys.stderr.flush()

                if len(read_buffer) == 0:
                    self.handle_except()
                    break
                while read_buffer.find(u'\n') != -1:
                    command, read_buffer = read_buffer.split(u'\n', 1)

                    args = command.split(u'\t', 2)
                    try:
                        cmd_id = int(args[0])
                        pydev_log.debug('Received command: %s %s\n' % (ID_TO_MEANING.get(str(cmd_id), '???'), command,))
                        self.process_command(cmd_id, int(args[1]), args[2])
                    except:
                        traceback.print_exc()
                        sys.stderr.write("Can't process net command: %s\n" % command)
                        sys.stderr.flush()
        except:
            traceback.print_exc()
            self.handle_except()
When starting debugging, this error frequently shows up:

    r = r.decode('utf-8')
        ^^^^^^^^^^^^^^^^^
    UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 1022-1023: unexpected end of data
|
open
|
2024-02-26T17:26:45Z
|
2024-02-26T17:26:45Z
|
https://github.com/reloadware/reloadium/issues/183
|
[] |
fangplu
| 0
|
httpie/cli
|
python
| 1,567
|
Online doc error
|
https://httpie.io/docs/cli/non-string-json-fields
> hobbies:='["http", "pies"]' \ # Raw JSON — Array
In my test (PyPI ver.), it should be
```
hobbies='["http", "pies"]'
```
instead of
```
hobbies:='["http", "pies"]'
```
|
closed
|
2024-03-05T03:45:23Z
|
2024-03-06T06:08:44Z
|
https://github.com/httpie/cli/issues/1567
|
[
"new"
] |
XizumiK
| 3
|
pyeventsourcing/eventsourcing
|
sqlalchemy
| 163
|
Question: inject services into aggregates.
|
Hello
I am new to your library.
I want to create an event-sourced aggregate with a service injected into the aggregate's constructor.
Say I want to create an aggregate that handles a command that contains a password.
I need to hash the password before constructing the event.
Something like this:
```python
class Registration(AggregateRoot):
    def __init__(self, *args, **kwargs):
        # injected
        self._id = kwargs.pop('id')
        self._hash_password = kwargs.pop('hash_password')
        super().__init__(*args, **kwargs)
        # aggregate state
        self._hashed_password = None

    def set_password(self, **kwargs):
        if self._hashed_password:
            raise InvalidOperation
        self.__trigger_event__(
            PasswordSet,
            **PasswordSet.validate({
                'id': self._id,
                'hashed_password': self._hash_password(kwargs.pop('password')),
            })
        )

    def on_event(self, event):
        if isinstance(event, PasswordSet):
            self._hashed_password = event.hashed_password
```
As you can see, here I inject the "hash_password" service, which is actually a Python function.
However, when I try to construct the aggregate:
```python
test_registration_aggregate = Registration.__create__(
    id='abcd',
    hash_password=lambda x: x,
)
```
I got the error
```
Traceback (most recent call last):
File "/home/runner/app/user_registrations/tests/practice.py", line 20, in setUp
hash_password=lambda: 'new_password',
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/domain/model/entity.py", line 229, in __create__
return super(EntityWithHashchain, cls).__create__(*args, **kwargs)
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/domain/model/entity.py", line 63, in __create__
**kwargs
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/domain/model/events.py", line 167, in __init__
self.__dict__['__event_hash__'] = self.__hash_object__(self.__dict__)
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/domain/model/events.py", line 145, in __hash_object__
return hash_object(cls.__json_encoder_class__, obj)
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/utils/hashing.py", line 12, in hash_object
cls=json_encoder_class,
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/utils/transcoding.py", line 133, in json_dumps
cls=cls,
File "/usr/local/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/utils/transcoding.py", line 23, in iterencode
return super(ObjectJSONEncoder, self).iterencode(o, _one_shot=_one_shot)
File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/utils/transcoding.py", line 65, in default
return JSONEncoder.default(self, obj)
File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type builtin_function_or_method is not JSON serializable
```
So I assume that whatever is passed into `__create__` should be serializable,
but then how else do I pass domain services into aggregates?
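To make the trade-off concrete, a minimal sketch (my own, with illustrative names) of the alternative of hashing outside the aggregate and passing in only plain, serializable values:
```python
import hashlib


def hash_password(password: str) -> str:
    # stand-in hashing function; a real app would use a proper password KDF
    return hashlib.sha256(password.encode("utf-8")).hexdigest()


# application-service layer (illustrative)
def set_password(registration, password: str) -> None:
    registration.set_password(hashed_password=hash_password(password))
```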
|
closed
|
2018-10-17T07:57:03Z
|
2018-11-01T02:01:01Z
|
https://github.com/pyeventsourcing/eventsourcing/issues/163
|
[] |
midnight-wonderer
| 4
|
thp/urlwatch
|
automation
| 58
|
Automatically cleaning up cached content
|
Does urlwatch automatically clean up its cache? i.e. keep only the latest version of a page and delete any old versions.
|
closed
|
2016-03-11T11:39:19Z
|
2016-03-12T12:34:41Z
|
https://github.com/thp/urlwatch/issues/58
|
[] |
Immortalin
| 7
|
polakowo/vectorbt
|
data-visualization
| 313
|
Internal numba list grows with each iteration by ~10mb
|
Hi,
I have noticed that if I call `from_signals`/`from_random_signals` inside a loop, some internal numba list grows with each iteration by ~10mb. If you have a large loop, then this fills up your memory quite quickly.
Here is an example which shows this issue:
```Python
import vectorbt as vbt
import numpy as np
from pympler import muppy, summary
import gc

price = vbt.YFData.download('BTC-USD').get('Close')
symbols = ["BTC-USD", "ETH-USD"]
price = vbt.YFData.download(symbols, missing_index='drop').get('Close')

for i in range(10):
    n = np.random.randint(10, 101, size=1000).tolist()
    portfolio = vbt.Portfolio.from_random_signals(price, n=n, init_cash=100, seed=42)
    del n, portfolio
    gc.collect()

    all_objects = muppy.get_objects()
    sum1 = summary.summarize(all_objects)
    summary.print_(sum1)

    # uncomment if you want to see the content of that list
    #list_obj = [ao for ao in all_objects if isinstance(ao, list)]
    #
    #for d in list_obj:
    #    print(d)
    #    print(len(d))
```
You will see in the summary which is printed in each iteration, that there is an object of type `list` which grows with each iteration.
Any idea how to solve that issue?
Thank you very much! And nice project btw!
|
closed
|
2021-12-30T11:09:28Z
|
2022-01-02T13:55:29Z
|
https://github.com/polakowo/vectorbt/issues/313
|
[] |
FalsePositiv3
| 11
|
abhiTronix/vidgear
|
dash
| 137
|
Framerate < 1 fps, display of out of date frames, in example code "Using NetGear_Async with Variable Parameters"
|
## Description
The client for the example [Using NetGear_Async with Variable Parameters](https://abhitronix.github.io/vidgear/gears/netgear_async/usage/) does:
1. display the first frame from the server on connection rather than the most recent
2. display frames updating at a frame rate < 1 per second
### Acknowledgment
<!--- By posting an issue you acknowledge the following: (Put an `x` in all the boxes that apply(important)) -->
- [x] I have searched the [issues](https://github.com/abhiTronix/vidgear/issues) for my issue and found nothing related or helpful.
- [x] I have read the [Documentation](https://abhitronix.github.io/vidgear).
- [x] I have read the [Issue Guidelines](https://abhitronix.github.io/contribution/issue/).
### Environment
Server:
Python 3.8
Raspberry Pi 4 4Gb
Current Raspbian
Current test branch of vidgear installed today (however issue occurred with regular pip3 branch too)
Running on a local network, (other video-streaming approaches do work at expected frame rates, seemingly implying it's not the network)
Client:
Mac OS Catalina
Python 3.8
Current pip3 install vidgear[asyncio]
### Expected Behavior
Whether the server or the client starts first, the client will display the latest frame from the server with reasonable frame rate (eg, > 5 frames per second).
### Actual Behavior
When the server starts first, then when the client connects, it starts showing frames one by one from when the server started at an extremely low rate (frame rate < 1 per second).
When the client starts first, and then the server starts, the client does start with the servers first (current) frame, but the frame rate on the client is so slow as to quickly get behind (< 1 frame per second)
### Possible Fix
Maybe there is config required to make the client/server ignore past frames, and start from the most recent frame when they connect?
Also, this warning prints out when the server starts:
`CamGear :: WARNING :: Threaded Queue Mode is disabled for the current video source!`
If I wave my arms like a madman while starting the server, and then start the client, I can see my arms waving in super super slow motion. That indicates to me that the server is in fact taking frames at the right frame rate (or else the arm wave would be jerkier), and the issue may be more with the client side, or in the transport of frames.
### Steps to reproduce
1. On the server, make sure to get the test branch of vidgear[asyncio] as described here: https://abhitronix.github.io/vidgear/installation/source_install/#installation
2. On the server, setup the example code "Using NetGear_Async with Variable Parameters" and run it:
```
# import libraries
from vidgear.gears.asyncio import NetGear_Async
import asyncio
#initialize Server with suitable source
server=NetGear_Async(source=0, address='192.168.1.14', port='5454', protocol='tcp', pattern=3, logging=True).launch()
if __name__ == '__main__':
#set event loop
asyncio.set_event_loop(server.loop)
try:
#run your main function task until it is complete
server.loop.run_until_complete(server.task)
except KeyboardInterrupt:
#wait for keyboard interrupt
pass
finally:
# finally close the server
server.close()
```
3. On the client, setup the code from the same example and run it:
```
# import libraries
from vidgear.gears.asyncio import NetGear_Async
import cv2, asyncio
#define and launch Client with `receive_mode=True`. #change following IP address '192.168.x.xxx' with yours
client=NetGear_Async(address='192.168.1.14', port='5454', protocol='tcp', pattern=3, receive_mode=True, logging=True).launch()
#Create a async function where you want to show/manipulate your received frames
async def main():
# loop over Client's Asynchronous Frame Generator
async for frame in client.recv_generator():
# do something with received frames here
# Show output window
cv2.imshow("Output Frame", frame)
key=cv2.waitKey(1) & 0xFF
#await before continuing
await asyncio.sleep(0.00001)
if __name__ == '__main__':
#Set event loop to client's
asyncio.set_event_loop(client.loop)
try:
#run your main function task until it is complete
client.loop.run_until_complete(main())
except KeyboardInterrupt:
#wait for keyboard interrupt
pass
# close all output window
cv2.destroyAllWindows()
# safely close client
client.close()
```
|
closed
|
2020-06-13T19:10:22Z
|
2020-06-25T00:54:05Z
|
https://github.com/abhiTronix/vidgear/issues/137
|
[
"QUESTION :question:",
"SOLVED :checkered_flag:"
] |
whogben
| 11
|
InstaPy/InstaPy
|
automation
| 5,842
|
Index Error
|
When I try to run my code it seems to go fine for a while (around 40mins), and then I get this error:
`Traceback (most recent call last):
File "/Users/rafael/Documents/Projects/InstaPySimon/InstagramSimonSetup.py", line 92, in <module>
session.unfollow_users(amount=500, instapy_followed_enabled=True, instapy_followed_param="all", style="RANDOM", unfollow_after=12 * 60 * 60, sleep_delay=501)
File "/Users/rafael/.pyenv/versions/3.8.1/lib/python3.8/site-packages/instapy/instapy.py", line 2001, in like_by_tags
success = process_comments(
File "/Users/rafael/.pyenv/versions/3.8.1/lib/python3.8/site-packages/instapy/comment_util.py", line 456, in process_comments
comment_state, msg = comment_image(
File "/Users/rafael/.pyenv/versions/3.8.1/lib/python3.8/site-packages/instapy/comment_util.py", line 67, in comment_image
rand_comment = random.choice(comments).format(username)
IndexError: Replacement index 1 out of range for positional args tuple`
## InstaPy configuration
` # FOLLOW
session.set_do_follow(enabled=True, percentage=80, times=1)
session.unfollow_users(amount=500, instapy_followed_enabled=True, instapy_followed_param="all", style="RANDOM", unfollow_after=12 * 60 * 60, sleep_delay=501)
`
I tried RANDOM and FIFO but it produces the same error...
|
closed
|
2020-10-23T11:02:15Z
|
2020-12-09T00:54:45Z
|
https://github.com/InstaPy/InstaPy/issues/5842
|
[
"wontfix"
] |
rafo
| 2
|
erdewit/ib_insync
|
asyncio
| 610
|
ib.portfolio() request for specific subaccount fails?
|
Thanks for the great library @erdewit.
I am trying to load the portfolio for a specific sub-account.
I tried using:
`ib.portfolio()`
It returns `[]`, as the first (default) account's portfolio is empty.
I checked the source for `ib.portfolio()` and modified it to take an account argument.
```
def portfolio(self, account) -> List[PortfolioItem]:
"""List of portfolio items of the default account."""
#account = self.wrapper.accounts[0]
return [v for v in self.wrapper.portfolio[account].values()]
```
this didn't work either.
The positions() code works with a subaccount specified. Am I overlooking something here?
I needed to update ib_insync. It works now.
Please close.
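For anyone else hitting this: after upgrading, the per-account call looks roughly like the sketch below (the `account` keyword on `portfolio()` exists only in newer releases, so treat the exact signature as an assumption and check your installed version):
```python
from ib_insync import IB

ib = IB()
ib.connect('127.0.0.1', 7497, clientId=1)

# 'DU1234567' is a placeholder sub-account code
for item in ib.portfolio(account='DU1234567'):
    print(item.contract.symbol, item.position, item.marketValue)
```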
|
closed
|
2023-06-29T12:26:44Z
|
2023-06-29T13:30:16Z
|
https://github.com/erdewit/ib_insync/issues/610
|
[] |
SidGoy
| 0
|
plotly/dash
|
dash
| 3,062
|
Improve Dependency Management by removing packages not needed at runtime
|
Dash's runtime requirements include some packages that are not needed at runtime.
See [requirements/install.txt](https://github.com/plotly/dash/blob/0d9cd2c2a611e1b8cce21d1d46b69234c20bdb11/requirements/install.txt)
**Is your feature request related to a problem? Please describe.**
Working in an enterprise setting, there are strict requirements regarding deploying secure software. Reducing the attack surface by installing only essential packages is key. As of now, Dash requires some packages to be installed in the runtime environment that are not needed to run the app at all, or not on particular/newer Python versions.
**Describe the solution you'd like**
1. Leverage PEP 518, which allows removing `setuptools` as a runtime dependency and adding it as a build-time dependency.
2. `importlib_metadata` is sparsely used. Depending on the Python version and the features needed, it is not required at all and can be replaced with `importlib.metadata`, which is included in the Python standard library (at least for >=3.8). Require it only for older Python versions; you can decide whether the standard-library module or the installed package should be used by checking the Python version when the module is imported. Add e.g. `importlib-metadata ; python_version < "3.9"` to the respective requirements file.
```python
import sys
if sys.version_info >= (3, 8):
from importlib.metadata import ...
else:
from importlib_metadata import ...
```
3. I am pretty sure that the `typing_extensions` package is not needed on newer Python versions (>=3.10). If you do not rely on runtime type checking, you can make it optional: on newer Python versions the types can be imported from the `typing` package, and you can additionally leverage the `typing.TYPE_CHECKING` constant. Again, require it only for older Python versions and check the Python version before importing the package, as in the sketch below.
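A minimal sketch of what point 3 could look like (the `JSONDict`/`Handler` aliases are just illustrations, not names from Dash's codebase):
```python
import sys
from typing import TYPE_CHECKING

if sys.version_info >= (3, 10):
    from typing import TypeAlias              # part of the stdlib from 3.10 onwards
else:
    from typing_extensions import TypeAlias   # only needed on older Pythons

if TYPE_CHECKING:
    # annotation-only imports are never executed at runtime
    from collections.abc import Callable

JSONDict: TypeAlias = "dict[str, object]"
Handler: TypeAlias = "Callable[[JSONDict], None]"
```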
**Describe alternatives you've considered**
No
|
open
|
2024-11-06T07:36:03Z
|
2024-11-11T14:44:18Z
|
https://github.com/plotly/dash/issues/3062
|
[
"feature",
"P2"
] |
waldemarmeier
| 1
|
keras-team/autokeras
|
tensorflow
| 928
|
Loss starts from initial value every epoch for structured classifier
|
### Bug Description
Loss starts from initial value every epoch for structured classifier:
> Train for 16 steps, validate for 4 steps
> 1/16 [>.............................] - ETA: 1s - loss: 2.3992 - accuracy: 0.8438
> 4/16 [======>.......................] - ETA: 0s - loss: 2.6361 - accuracy: 0.8281
> 7/16 [============>.................] - ETA: 0s - loss: 2.6707 - accuracy: 0.8259
> 10/16 [=================>............] - ETA: 0s - loss: 2.4452 - accuracy: 0.8406
> 13/16 [=======================>......] - ETA: 0s - loss: 2.5466 - accuracy: 0.8341
> 16/16 [==============================] - 1s 36ms/step - loss: 2.5186 - accuracy: 0.8347 - val_loss: 2.2202 - val_accuracy: 0.8560
> Epoch 2/1000
> 1/16 [>.............................] - ETA: 0s - loss: 2.3992 - accuracy: 0.8438
> 4/16 [======>.......................] - ETA: 0s - loss: 2.6361 - accuracy: 0.8281
> 7/16 [============>.................] - ETA: 0s - loss: 2.6707 - accuracy: 0.8259
> 10/16 [=================>............] - ETA: 0s - loss: 2.4452 - accuracy: 0.8406
> 13/16 [=======================>......] - ETA: 0s - loss: 2.5466 - accuracy: 0.8341
> 16/16 [==============================] - 0s 28ms/step - loss: 2.5186 - accuracy: 0.8347 - val_loss: 2.2202 - val_accuracy: 0.8560
> Epoch 3/1000
> 1/16 [>.............................] - ETA: 0s - loss: 2.3992 - accuracy: 0.8438
> 4/16 [======>.......................] - ETA: 0s - loss: 2.6361 - accuracy: 0.8281
> 7/16 [============>.................] - ETA: 0s - loss: 2.6707 - accuracy: 0.8259
> 10/16 [=================>............] - ETA: 0s - loss: 2.4452 - accuracy: 0.8406
> 13/16 [=======================>......] - ETA: 0s - loss: 2.5466 - accuracy: 0.8341
> 16/16 [==============================] - 0s 28ms/step - loss: 2.5186 - accuracy: 0.8347 - val_loss: 2.2202 - val_accuracy: 0.8560
> Epoch 4/1000
> 1/16 [>.............................] - ETA: 0s - loss: 2.3992 - accuracy: 0.8438
> 4/16 [======>.......................] - ETA: 0s - loss: 2.6361 - accuracy: 0.8281
> 7/16 [============>.................] - ETA: 0s - loss: 2.6707 - accuracy: 0.8259
> 10/16 [=================>............] - ETA: 0s - loss: 2.4452 - accuracy: 0.8406
> 13/16 [=======================>......] - ETA: 0s - loss: 2.5466 - accuracy: 0.8341
> 16/16 [==============================] - 0s 27ms/step - loss: 2.5186 - accuracy: 0.8347 - val_loss: 2.2202 - val_accuracy: 0.8560
> Epoch 5/1000
### Reproducing Steps
Run titanic example and set verbose to 1
### Expected Behavior
Decreasing loss for every epoch, approaching some value
### Setup Details
Include the details about the versions of:
- OS type and version:
- Python: 3.7
- autokeras: 1.0.0
- tensorflow: '2.1.0'
### Additional context
Code to reproduce:
```
import autokeras as ak
import tensorflow as tf
import pandas as pd
if __name__ == '__main__':
print(f"Tensorflow version: {tf.__version__}")
x_train = pd.read_csv('titanic/train.csv')
print(type(x_train)) # pandas.DataFrame
y_train = x_train.pop('survived')
print(type(y_train)) # pandas.Series
# Preparing testing data.
x_test = pd.read_csv('titanic/eval.csv')
y_test = x_test.pop('survived')
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(max_trials=20) # It tries 20 different models.
# Feed the tensorflow Dataset to the classifier.
clf.fit(x_train, y_train, use_multiprocessing=False, batch_size=32, validation_split=0.2)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
```
|
closed
|
2020-01-25T16:22:16Z
|
2021-11-12T08:13:17Z
|
https://github.com/keras-team/autokeras/issues/928
|
[
"bug report",
"wontfix"
] |
SirJohnFranklin
| 2
|
PaddlePaddle/ERNIE
|
nlp
| 718
|
Can't open VCR_resnet101_faster_rcnn_genome_pickle2.lmdb: ValueError: unsupported pickle protocol: 3
|
I want to run the VCR inference (run_inference.sh) in Python 2.7; however, this error occurred:
```
Traceback (most recent call last):
File "finetune.py", line 868, in <module>
main(args)
File "finetune.py", line 802, in main
graph_vars=model_outputs)
File "finetune.py", line 436, in predict_wrapper
epoch=args.epoch)
File "/home/ywjang/ERNIE-repro/ernie-vil/reader/vcr_finetuning.py", line 414, in __init__
ImageFeaturesH5Reader(task_conf['feature_lmdb_path'])
File "/home/ywjang/ERNIE-repro/ernie-vil/reader/_image_features_reader.py", line 34, in __init__
self._image_ids = pickle.loads(txn.get('keys'.encode()))
ValueError: unsupported pickle protocol: 3
```
feature_path is `u'./data/vcr/VCR_resnet101_faster_rcnn_genome_pickle2.lmdb')`,
and the error `ValueError: unsupported pickle protocol: 3` usually comes from a pickle protocol version mismatch, so I tried to convert the protocol of the lmdb (mdb) files:
```
pickle.dump(pickle.load(open('data.mdb', 'rb')), open('data2.mdb', 'wb'), protocol=2)
```
And now, another error occurred:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
_pickle.UnpicklingError: invalid load key, '\x00'.
```
Is this github code working?
p.s. in python 3.5, another error occurred:
```
use gt featurre
Traceback (most recent call last):
File "finetune.py", line 868, in <module>
main(args)
File "finetune.py", line 802, in main
graph_vars=model_outputs)
File "finetune.py", line 436, in predict_wrapper
epoch=args.epoch)
File "/home/ywjang/ERNIE-repro/ernie-vil/reader/vcr_finetuning.py", line 424, in __init__
epoch, is_test, feature_reader_dict, random_seed, task_index, len(task_conf_group)))
File "/home/ywjang/ERNIE-repro/ernie-vil/reader/vcr_finetuning.py", line 186, in __init__
self.tokenize()
File "/home/ywjang/ERNIE-repro/ernie-vil/reader/vcr_finetuning.py", line 255, in tokenize
q_str = " ".join(tokens_a)
TypeError: sequence item 0: expected str instance, bytes found
```
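For reference: the `.mdb` file is an LMDB database, so the pickled objects live in its values rather than in the file itself, which is why re-pickling the whole file fails. A sketch of a by-hand conversion, assuming Python 3 and the `lmdb` package (the file names are illustrative, and it assumes every value in the database is itself a pickle, as the reader code above suggests):
```python
import lmdb
import pickle

src = lmdb.open("VCR_resnet101_faster_rcnn_genome.lmdb", readonly=True, lock=False)
dst = lmdb.open("VCR_resnet101_faster_rcnn_genome_pickle2.lmdb", map_size=1 << 40)

with src.begin() as rtxn, dst.begin(write=True) as wtxn:
    for key, value in rtxn.cursor():
        obj = pickle.loads(value)                     # read with Python 3
        wtxn.put(key, pickle.dumps(obj, protocol=2))  # re-write as protocol 2

src.close()
dst.close()
```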
|
closed
|
2021-07-19T14:03:20Z
|
2021-10-05T00:52:18Z
|
https://github.com/PaddlePaddle/ERNIE/issues/718
|
[
"wontfix"
] |
greeksharifa
| 2
|
mljar/mercury
|
data-visualization
| 59
|
create rich demo
|
The rich package works in the notebook, so it could make for a nice demo.
https://github.com/Textualize/rich
|
closed
|
2022-03-10T13:33:52Z
|
2023-02-15T10:06:06Z
|
https://github.com/mljar/mercury/issues/59
|
[] |
pplonski
| 0
|
miguelgrinberg/Flask-Migrate
|
flask
| 291
|
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) FOREIGN KEY constraint failed
|
What I am trying to do here is:
1. Add a column to the existing table that has constraints
2. Try to delete the recently added column
I am using flask-sqlalchemy for SQLite
[https://github.com/bstinsonmhk/duffy/commit/f3a31ecee236a10f2c694458975569fb4019be7b](https://github.com/bstinsonmhk/duffy/commit/f3a31ecee236a10f2c694458975569fb4019be7b)
I followed this for naming convention and render_as_batch.
When I run `flask db upgrade` I get this:
> sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) FOREIGN KEY constraint failed
[SQL:
DROP TABLE "table"]
(Background on this error at: http://sqlalche.me/e/gkpj)
I am not sure what I am missing, and I cannot find an example.
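For reference, a minimal sketch of the naming-convention plus `render_as_batch` setup that the linked commit appears to be aiming for (`app` and `db` are the usual Flask-SQLAlchemy objects; this is the commonly recommended SQLite configuration rather than a guaranteed fix for the error above):
```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
from sqlalchemy import MetaData

# named constraints so Alembic's batch mode can find and drop them on SQLite
convention = {
    "ix": "ix_%(column_0_label)s",
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "ck": "ck_%(table_name)s_%(constraint_name)s",
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "pk": "pk_%(table_name)s",
}

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"
db = SQLAlchemy(app, metadata=MetaData(naming_convention=convention))
migrate = Migrate(app, db, render_as_batch=True)
```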
|
closed
|
2019-09-17T18:08:54Z
|
2020-01-19T18:47:20Z
|
https://github.com/miguelgrinberg/Flask-Migrate/issues/291
|
[
"question"
] |
shivarajalagond
| 4
|
babysor/MockingBird
|
pytorch
| 826
|
Problems during training
|
When training with the official NVIDIA PyTorch 23.01 Docker image, a Floating point exception frequently occurs and interrupts training.
Python version: 3.8.10

When training on Windows 11, the run hangs after finishing one epoch and does not continue; CUDA usage in Task Manager drops to 0, Ctrl+C cannot terminate it, and the only option is to close the cmd window and reopen it. Python version 3.9.0, GPU RTX 4090, CUDA 11.3.
|
closed
|
2023-02-12T15:44:15Z
|
2023-02-18T02:56:12Z
|
https://github.com/babysor/MockingBird/issues/826
|
[] |
YuuLuo
| 5
|
viewflow/viewflow
|
django
| 83
|
Cannot assign "<Process: <foo/bar/None> - NEW>": "Process" instance isn't saved in the database.
|
I was going through the documented hello world example, created a little accommodation booking system as a proof of concept, and ran into this issue.
I added my entire [code example](https://www.dropbox.com/s/r1lwkgag0l7xcq9/flow.zip?dl=1).
The problem pops up when I want to start a new process.
models.py
```
class Accommodation(TitleSlugDescriptionModel):
email = models.EmailField(_('email'))
class AccommodationProcess(Process):
accommodation = models.ForeignKey('Accommodation', null=True)
check_in = models.DateTimeField(_('check-in'), null=True)
nights = models.PositiveSmallIntegerField(_('number of nights'), null=True,
validators=[MinValueValidator(1)])
rooms = models.PositiveSmallIntegerField(_('number of rooms'), null=True,
validators=[MinValueValidator(1)])
price = models.PositiveSmallIntegerField(_('price'), help_text=_('per night'), null=True)
is_required = models.BooleanField(_('accommodation required'), default=False)
is_confirmed = models.BooleanField(_('booking is confirmed'), default=False)
```
flows.py
```
class AccommodationFlow(Flow):
process_cls = models.AccommodationProcess
lock_impl = lock.select_for_update_lock
start = flow.Start(StartProcessView, fields=['is_requied']) \
.Permission(auto_create=True) \
.Next(this.check_requirement)
check_requirement = flow.If(cond=lambda p: p.is_required) \
.OnTrue(this.find_accommodation) \
.OnFalse(this.end)
find_accommodation = flow.View(ProcessView, fields=['accommodation']) \
.Permission(auto_create=True) \
.Next(this.check_accommodation)
check_accommodation = flow.If(cond=lambda p: p.accommodation is not None) \
.OnTrue(this.create_booking) \
.OnFalse(this.add_accommodation)
create_booking = flow.View(ProcessView, fields=['accommodation', 'checkin', 'nights', 'rooms']) \
.Permission(auto_create=True) \
.Next(this.send_booking)
add_accommodation = flow.View(ProcessView, fields='accommodation') \
.Permission(auto_create=True) \
.Next(this.create_booking)
send_booking = flow.Function(func=tasks.send_booking_request) \
.Next(this.confirm_booking)
confirm_booking = flow.View(ProcessView, fields=['is_confirmed', 'price_per_night']) \
.Next(this.check_booking_confirmation)
check_booking_confirmation = flow.If(cond=lambda p: p.is_confirmed) \
.OnTrue(this.end) \
.OnFalse(this.find_accommodation)
end = flow.End()
```
|
closed
|
2015-04-29T12:33:35Z
|
2015-07-31T08:24:34Z
|
https://github.com/viewflow/viewflow/issues/83
|
[
"request/bug"
] |
codingjoe
| 2
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,469
|
The ONNX network's output '625' dimensions should be non-negative
|
Hello,
I am trying to convert a cyclegan model to onnx.
I referred [this answer](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1113#issuecomment-728681741) for onnx importing, and it worked without errors.
But when I try to import the onnx file into the ONNX runtime (Snap Lens Studio), I get the error below:
`Resource import for /Users/youjin/Downloads/latest_net_G (1).onnx failed: The ONNX network's output '625' dimensions should be non-negative`

Looking at the structure of the model, '625' is the output node and its dimension is unknown, which is treated as '-1'.
I wonder where I can find the output node, and whether anyone has an idea how to set it to the right dimension.
|
closed
|
2022-08-11T19:09:05Z
|
2025-01-05T14:46:48Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1469
|
[] |
youjin-c
| 5
|
plotly/dash
|
flask
| 2,511
|
provide a command-line interface for Dash
|
Thanks so much for your interest in Dash!
Before posting an issue here, please check the Dash [community forum](https://community.plotly.com/c/dash) to see if the topic has already been discussed. The community forum is also great for implementation questions. When in doubt, please feel free to just post the issue here :)
**Is your feature request related to a problem? Please describe.**
I see that some ideas are related to this sentiment of Dash needing a CLI. Some people have suggested cookiecutter. I think Dash needs a cli created in a directory `cli/`. Some commands I think would be great off the bat.
`dash create-app <project-name>` - create a project with certain directories, including a venv directory
`dash run --dev` - run a dash app with or without debug mode
Possibly even add a command to run in Docker
**Describe the solution you'd like**
A directory that is dedicated to the `cli/` that contains all of the logic and the commands needed
**Describe alternatives you've considered**
I've considered creating a separate repo for this, but it makes the project sloppy and updates can get out of sync. I've also seen cookiecutter as a workaround, but for similar reasons, I think a dedicated cli that is in the repo is a better alternative. Even if that cli uses cookiecutter under the hood.
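A rough sketch of what such a `cli/` entry point could look like (purely illustrative: the command names, the Click dependency, and the use of the `DASH_DEBUG` environment variable are assumptions on my part, not an existing Dash API):
```python
# cli/__init__.py -- hypothetical module layout
import os
import pathlib
import subprocess
import sys

import click


@click.group()
def dash_cli():
    """Top-level `dash` command."""


@dash_cli.command("create-app")
@click.argument("project_name")
def create_app(project_name):
    """Scaffold a new Dash project with an app.py and a venv directory."""
    root = pathlib.Path(project_name)
    root.mkdir()
    (root / "app.py").write_text("# your Dash app goes here\n")
    subprocess.run([sys.executable, "-m", "venv", str(root / "venv")], check=True)


@dash_cli.command("run")
@click.option("--dev", is_flag=True, help="Enable debug/hot-reload mode.")
def run(dev):
    """Run app.py from the current directory."""
    env = {**os.environ, "DASH_DEBUG": "true" if dev else "false"}
    subprocess.run([sys.executable, "app.py"], env=env, check=True)


if __name__ == "__main__":
    dash_cli()
```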
|
open
|
2023-04-22T14:07:06Z
|
2024-08-13T19:31:39Z
|
https://github.com/plotly/dash/issues/2511
|
[
"feature",
"P3"
] |
erdos2n
| 0
|
youfou/wxpy
|
api
| 30
|
How to remove a specified user
|
I want to @ the bot to remove a certain user; how should I write this?
I have written half of it, as follows:
@bot.register(teamgroup)
def remove_msg(msg):
if msg.is_at:
if '移除' in msg.text.lower():
            remove_members( ) <------ how should I pass the parameter for this "certain user"?
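A rough sketch of how the missing half might look, assuming the member's name follows the '移除' keyword in the message text and that the bot account owns the group (the `Group.search`/`remove_members` usage is from memory, so please double-check it against the wxpy docs):
```python
@bot.register(teamgroup)
def remove_msg(msg):
    if msg.is_at and '移除' in msg.text.lower():
        # e.g. "@bot 移除 张三"  ->  name = "张三"
        name = msg.text.split('移除', 1)[-1].strip()
        matches = msg.chat.search(name)          # look the member up in this group
        if matches:
            msg.chat.remove_members([matches[0]])
            return 'Removed {}'.format(name)     # wxpy sends the return value back as a reply
        return 'Could not find member {}'.format(name)
```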
|
closed
|
2017-04-14T03:34:34Z
|
2017-04-14T04:57:15Z
|
https://github.com/youfou/wxpy/issues/30
|
[] |
nkta3m
| 0
|
HumanSignal/labelImg
|
deep-learning
| 54
|
Can't open annotations with more than 6 boxes
|
We are using the prebuilt files (v1.2.1) for Windows, and when entering more than 6 boxes the annotations cannot be opened again. Everything looks fine for one up to 6 boxes, but when a page contains 7 or more boxes I cannot open the annotation again. The following error shows up in the command line window:
```
Traceback (most recent call last):
File "<string>", line 919, in openNextImg
File "<string>", line 741, in loadFile
File "<string>", line 1078, in loadPascalXMLByFilename
File "Z:\home\darrenl\tools\labelImg\build-tools\build\labelImg\out00-PYZ.pyz\
pascal_voc_io", line 115, in __init__
File "Z:\home\darrenl\tools\labelImg\build-tools\build\labelImg\out00-PYZ.pyz\
pascal_voc_io", line 131, in parseXML
File "Z:\home\darrenl\tools\labelImg\build-tools\build\labelImg\out00-PYZ.pyz\
xml.etree.ElementTree", line 1182, in parse
File "Z:\home\darrenl\tools\labelImg\build-tools\build\labelImg\out00-PYZ.pyz\
xml.etree.ElementTree", line 657, in parse
File "parser.pxi", line 1166, in lxml.etree._FeedParser.close (src/lxml/lxml.e
tree.c:76950)
File "parser.pxi", line 556, in lxml.etree._ParserContext._handleParseResult (
src/lxml/lxml.etree.c:71680)
File "parser.pxi", line 645, in lxml.etree._handleParseResult (src/lxml/lxml.e
tree.c:72614)
File "parser.pxi", line 585, in lxml.etree._raiseParseError (src/lxml/lxml.etr
ee.c:71955)
lxml.etree.XMLSyntaxError: internal error, line 6, column 15
```
I don't have any path beginning with "Z:\home\darrenl\tools\labelImg".
|
closed
|
2017-02-17T16:10:49Z
|
2017-02-20T09:33:03Z
|
https://github.com/HumanSignal/labelImg/issues/54
|
[] |
zuphilip
| 6
|
microsoft/MMdnn
|
tensorflow
| 133
|
Caffe to TensorFlow model, error!
|
caffemodel for ctpn
deploy.prototxt
[deploy.txt](https://github.com/Microsoft/MMdnn/files/1865306/deploy.txt)
error information:
I0331 08:47:32.099702 7859 net.cpp:228] relu1_1 does not need backward computation.
I0331 08:47:32.099707 7859 net.cpp:228] conv1_1 does not need backward computation.
I0331 08:47:32.099712 7859 net.cpp:228] input does not need backward computation.
I0331 08:47:32.099717 7859 net.cpp:270] This network produces output rois
I0331 08:47:32.099723 7859 net.cpp:270] This network produces output scores
I0331 08:47:32.099759 7859 net.cpp:283] Network initialization done.
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 164, in <module>
_main()
File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 159, in _main
ret = _convert(args)
File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 9, in _convert
transformer = CaffeTransformer(args.network, args.weights, "tensorflow", args.inputShape, phase = args.caffePhase)
File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/caffe/transformer.py", line 316, in __init__
graph = GraphBuilder(def_path, self.input_shape, self.is_train_proto, phase).build()
File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/caffe/graph.py", line 447, in build
graph.compute_output_shapes(self.model)
File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/caffe/graph.py", line 266, in compute_output_shapes
node.output_shape = TensorShape(*NodeKind.compute_output_shape(node))
File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/caffe/graph.py", line 126, in compute_output_shape
return LAYER_DESCRIPTORS[node.kind](node)
KeyError: None
|
open
|
2018-03-31T09:00:00Z
|
2018-04-12T04:55:34Z
|
https://github.com/microsoft/MMdnn/issues/133
|
[] |
FakerYFX
| 3
|
python-gino/gino
|
asyncio
| 157
|
Chinese Docs
|
Translations are supposed to be done on [Transifex](https://www.transifex.com/decentfox-studio/gino/), and the built docs can be found here: https://python-gino.org/docs/zh/master/index.html
Automated build is already set, only translation to go!
- [x] index (fantix)
- [x] tutorial (fantix)
- [ ] schema
- [ ] engine
- [ ] transaction
- [ ] crud
- [ ] relationship
API docs:
- [ ] `api.py`
- [ ] `engine.py`
- [ ] `crud.py`
- [ ] `declarative.py`
- [ ] `dialects/asyncpg.py`
- [ ] `json_support.py`
- [ ] `strategies.py`
- [ ] `schema.py`
- [ ] `transaction.py`
|
open
|
2018-03-14T08:26:43Z
|
2020-04-20T22:53:00Z
|
https://github.com/python-gino/gino/issues/157
|
[
"enhancement",
"help wanted"
] |
fantix
| 0
|
nonebot/nonebot2
|
fastapi
| 2,647
|
Plugin: 飞花令
|
### PyPI project name
nonebot-plugin-fhl
### Plugin import package name
nonebot_plugin_fhl
### Tags
[{"label":"飞花令","color":"#ea5252"}]
### Plugin configuration options
_No response_
|
closed
|
2024-04-17T12:42:52Z
|
2024-04-18T06:10:12Z
|
https://github.com/nonebot/nonebot2/issues/2647
|
[
"Plugin"
] |
baiqwerdvd
| 1
|
jina-ai/serve
|
fastapi
| 5,474
|
feat: silence or minimize output of jina ping command
|
**Describe the feature**
<!-- A clear and concise description of what the feature is. -->
The jina ping command is also used for the startup, readiness and/or liveness probe. The command currently prints the jina logo and the arguments of the command by default, which pollutes the Kubernetes event log and the pod events.
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16m default-scheduler Successfully assigned test-failure-scenarios/executor0-57788fb8fc-tjzbm to pytest-kind-control-plane
Normal Pulled 16m kubelet Container image "cr.l5d.io/linkerd/proxy-init:v1.5.3" already present on machine
Normal Created 16m kubelet Created container linkerd-init
Normal Started 16m kubelet Started container linkerd-init
Normal Pulled 16m kubelet Container image "cr.l5d.io/linkerd/proxy:stable-2.11.4" already present on machine
Normal Created 16m kubelet Created container linkerd-proxy
Normal Started 16m kubelet Started container linkerd-proxy
Warning Unhealthy 15m kubelet Liveness probe failed: executing liveness probe...
MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMWWWMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMMMMMMMMMMMMMMWNNNNNNNWMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMMMMMMMMMMMMMMNNNNNNNNNMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMMMMMMMMMMMMMMWNNNNNNNNMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMWNNNWWMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
MMMMMMMMMMMMWxxxxxxxxxOMMMMMNxxxxxxxxx0MMMMMKddddddxkKWMMMMMMMMMMMMXOxdddxONMMMM
MMMMMMMMMMMMXllllllllldMMMMM0lllllllllxMMMMMOllllllllllo0MMMMMMMM0olllllllllo0MM
MMMMMMMMMMMMXllllllllldMMMMM0lllllllllxMMMMMOlllllllllllloWMMMMMdllllllllllllldM
MMMMMMMMMMMMXllllllllldMMMMM0lllllllllxMMMMMOllllllllllllloMMMM0lllllllllllllllK
MMMMMMMMMMMMKllllllllldMMMMM0lllllllllxMMMMMOllllllllllllllKMMM0lllllllllllllllO
MMMMMMMMMMMMKllllllllldMMMMM0lllllllllxMMMMMOllllllllllllll0MMMMollllllllllllllO
MWOkkkkk0MMMKlllllllllkMMMMM0lllllllllxMMMMMOllllllllllllll0MMMMMxlllllllllllllO
NkkkkkkkkkMMKlllllllloMMMMMM0lllllllllxMMMMMOllllllllllllll0MMMMMMWOdolllllllllO
KkkkkkkkkkNMKllllllldMMMMMMMMWWWWWWWWWMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
MOkkkkkkk0MMKllllldXMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
MMWX00KXMMMMXxk0XMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
/usr/local/bin/jina ping executor
127.0.0.1:8080
╭─────────────────────────┬────────────────╮
│ Argument │ Value │
├─────────────────────────┼────────────────┤
│ attempts │ 1 │
│ cli │ ping │
│ host │ 127.0.0.1:8080 │
│ min-successful-attempts │ 1 │
│ target │ executor │
│ timeout │ 3000 │
╰─────────────────────────┴────────────────╯
INFO executor0@47 ping executor on 127.0.0.1:8080 at 0 [12/01/22 12:58:50]
round...
INFO executor0@47 ping executor on 127.0.0.1:8080 at 0
round takes 0 seconds (0.01s)
INFO executor0@47 avg. latency: 5 ms [12/01/22 12:58:51]
INFO executor0@47 readiness check succeeded 1 times!!!
```
**Your proposal**
<!-- copy past your code/pull request link -->
Remove the jina logo and args printing. Move the log messages to debug level.
---
<!-- Optional, but really help us locate the problem faster -->
**Environment**
<!-- Run `jina --version-full` and copy paste the output here -->
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
|
closed
|
2022-12-01T14:09:06Z
|
2022-12-01T15:59:08Z
|
https://github.com/jina-ai/serve/issues/5474
|
[] |
girishc13
| 0
|
roboflow/supervision
|
computer-vision
| 896
|
IndexError:arrays used as indices must be of integer (or boolean) type
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
While running YOLO-NAS and SAM for segmentation on my sample video, I am getting the following error:
ERROR:
detections = detections[detections.area == np.max(detections.area)]
---> 62 segmented_mask = mask_annotator.annotate(scene=frame, detections=detections)
63 return segmented_mask
[/usr/local/lib/python3.10/dist-packages/supervision/annotators/core.py](https://localhost:8080/#) in annotate(self, scene, detections, custom_color_lookup)
250 )
251 mask = detections.mask[detection_idx]
--> 252 colored_mask[mask] = color.as_bgr()
253
254 scene = cv2.addWeighted(colored_mask, self.opacity, scene, 1 - self.opacity, 0)
IndexError: arrays used as indices must be of integer (or boolean) type
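One likely cause (an assumption, since the mask-producing code is not shown): the SAM masks were stored as a float array, while `MaskAnnotator` uses them to index into the frame and therefore needs a boolean dtype. A minimal sketch of casting before annotating:
```python
# assumes `detections.mask` currently holds 0/1 float masks coming from SAM
if detections.mask is not None and detections.mask.dtype != bool:
    detections.mask = detections.mask.astype(bool)

segmented_mask = mask_annotator.annotate(scene=frame, detections=detections)
```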
### Additional
_No response_
|
closed
|
2024-02-13T16:25:29Z
|
2024-02-14T11:13:50Z
|
https://github.com/roboflow/supervision/issues/896
|
[
"question"
] |
josh-001
| 4
|
deepspeedai/DeepSpeed
|
pytorch
| 5,648
|
RuntimeError: still have inflight params[BUG]
|
**Describe the bug**
Hello, can someone help? I am using v0.14.3, installed from the source code tar.gz: https://github.com/melMass/DeepSpeed/releases
I use DeepSpeed ZeRO-3 to train a LLaMA Factory KTO task, and I hit this problem during the training-evaluate stage.
**Launcher context**
deepspeed --num_gpus 1 --master_port=9901 src/train.py .....
{
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"zero_allow_untested_optimizer": true,
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
}
}
**Docker context**
Are you using a specific docker image that you can share?
**Additional context**
RuntimeError: still have inflight params [{'id': 35, 'status': 'AVAILABLE', 'numel': 65536, 'ds_numel': 65536, 'shape': (4096, 16), 'ds_shape': (4096, 16), 'requires_grad': False, 'grad_shape': None, 'persist': False, 'active_sub_modules': set(), 'ds_tensor.shape': torch.Size([65536])}, {'id': 37, 'status': 'AVAILABLE', 'numel': 16384, 'ds_numel': 16384, 'shape': (1024, 16), 'ds_shape': (1024, 16), 'requires_grad': False, 'grad_shape': None, 'persist': True, 'active_sub_modules': set(), 'ds_tensor.shape': torch.Size([16384])}, {'id': 39, 'status': 'AVAILABLE', 'numel': 65536, 'ds_numel': 65536, 'shape': (4096, 16), 'ds_shape': (4096, 16), 'requires_grad': False, 'grad_shape': None, 'persist': False, 'active_sub_modules': set(), 'ds_tensor.shape': torch.Size([65536])}, {'id': 41, 'status': 'AVAILABLE', 'numel': 229376, 'ds_numel': 229376, 'shape': (14336, 16), 'ds_shape': (14336, 16), 'requires_grad': False, 'grad_shape': None, 'persist': False, 'active_sub_modules': set(), 'ds_tensor.shape': torch.Size([229376])}, {'id': 45, 'status': 'AVAILABLE', 'numel': 229376, 'ds_numel': 229376, 'shape': (14336, 16), 'ds_shape': (14336, 16), 'requires_grad': False, 'grad_shape': None, 'persist': False, 'active_sub_modules': set(), 'ds_tensor.shape': torch.Size([229376])}, {'id': 43, 'status': 'AVAILABLE', 'numel': 65536, 'ds_numel': 65536, 'shape': (4096, 16), 'ds_shape': (4096, 16), 'requires_grad': False, 'grad_shape': None, 'persist': False, 'active_sub_modules': set(), 'ds_tensor.shape': torch.Size([65536])}, {'id': 47, 'status': 'AVAILABLE', 'numel': 229376, 'ds_numel': 229376, 'shape': (14336, 16), 'ds_shape': (14336, 16), 'requires_grad': False, 'grad_shape': None, 'persist': False, 'active_sub_modules': set(), 'ds_tensor.shape': torch.Size([229376])}, {'id': 51, 'status': 'AVAILABLE', 'numel': 229376, 'ds_numel': 229376, 'shape': (14336, 16), 'ds_shape': (14336, 16), 'requires_grad': False, 'grad_shape': None, 'persist': False, 'active_sub_modules': set(), 'ds_tensor.shape': torch.Size([229376])}]
|
closed
|
2024-06-12T12:47:57Z
|
2024-08-03T16:32:31Z
|
https://github.com/deepspeedai/DeepSpeed/issues/5648
|
[
"bug",
"training"
] |
iszengxin
| 5
|
onnx/onnx
|
deep-learning
| 6,352
|
[Feature request] Better support for large models (>2GB) in extract_model
|
### System information
1.16.2
### What is the problem that this feature solves?
Allows for extracting sub-models from a large model (>2GB). When using this function (both with the loaded model and the model path), we are forced to do 2 things:
* `infer_shapes` with the loaded model (in the `Extractor` init). This function does not work with models > 2GB and thus returns an empty graph.
* in `extract_model`, we are forced to extract the sub-models **with** the weights/external data. This can lead to very large extracted sub-models (an `.onnx` file > 2GB), which will then fail to load.
### Alternatives considered
If one seeks to use `extract_model`, there is no other solution besides editing the library code itself.
### Describe the feature
Pass in a parameter which allows to `load_external_data` in `extract_model`. Also alter how we init in the `Extractor` class.
```python
def extract_model(
input_path: str | os.PathLike,
output_path: str | os.PathLike,
input_names: list[str],
output_names: list[str],
check_model: bool = True,
load_external_data=False
) -> None:
    e = Extractor(input_path, load_external_data)
```
```python
from onnx.shape_inference import infer_shapes_path, infer_shapes
class Extractor:
def __init__(self, model_path: str, load_external_data) -> None:
if load_external_data: # infer shape first, as loaded model + external data could be large
infer_shapes_path(model_path)
self.model = onnx.load(model_path, load_external_data)
else:
model = onnx.load(model_path, load_external_data)
self.model = shape_inference.infer_shapes(model)
self.graph = self.model.graph
self.wmap = self._build_name2obj_dict(self.graph.initializer)
self.vimap = self._build_name2obj_dict(self.graph.value_info)
```
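And a usage sketch of how the proposed flag would be called (the `load_external_data` argument is part of this proposal, not the current API; today's `onnx.utils.extract_model` takes only the first five arguments, and the file/tensor names below are illustrative):
```python
import onnx.utils

# proposed: keep the >2GB weights external while extracting a subgraph
onnx.utils.extract_model(
    "big_model.onnx",
    "sub_model.onnx",
    input_names=["encoder_input"],
    output_names=["encoder_output"],
    check_model=True,
    load_external_data=False,  # proposed flag from this feature request
)
```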
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
shape_inference, model usage
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_
|
open
|
2024-09-09T08:58:13Z
|
2024-10-23T03:36:51Z
|
https://github.com/onnx/onnx/issues/6352
|
[
"topic: enhancement"
] |
highly0
| 3
|
Nike-Inc/koheesio
|
pydantic
| 42
|
[DOC] Wrong package manager in Contributing guide
|
Contributing guide refers to poetry as package manager: https://github.com/Nike-Inc/koheesio/blob/main/CONTRIBUTING.md
|
closed
|
2024-06-07T14:57:21Z
|
2024-06-21T19:15:42Z
|
https://github.com/Nike-Inc/koheesio/issues/42
|
[
"bug",
"documentation"
] |
riccamini
| 0
|
ultralytics/ultralytics
|
python
| 19,287
|
FastSam output
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello,
I was trying FastSam on some images, but the outputs were not as I expected.
First, I used an object detector to extract the bounding boxes; then, I passed the bounding boxes to FastSam.
As you can observe, the bounding boxes change. Do you know if this behavior is expected?


### Additional
_No response_
|
closed
|
2025-02-17T23:29:09Z
|
2025-02-21T01:38:30Z
|
https://github.com/ultralytics/ultralytics/issues/19287
|
[
"question",
"detect"
] |
SebastianJanampa
| 4
|
plotly/dash
|
flask
| 2,406
|
[BUG] JS Error for a multi-page dash application when upgrading to > `2.7`
|
I am noticing JS errors in a Dash multi-page application after upgrading Dash. There are no JS errors on Dash v`2.6.2`.
- replace the result of `pip list | grep dash` below
```
app==0.0.1
boto==2.49.0
boto3==1.20.23
botocore==1.23.26
census==0.8.18
censusgeocode==0.5.1
dash==2.7.1
dash_bootstrap_components==1.2.0
dash_daq==0.5.0
Flask==1.1.4
Flask_Login==0.5.0
Flask_SQLAlchemy==2.5.1
geocoder==1.38.1
geolib==1.0.7
geopandas==0.10.2
geopy==2.2.0
pygeodesy==22.1.22
google_streetview==1.2.9
googlemaps==4.5.3
google-cloud-bigquery==2.31.0
mapbox==0.18.0
matplotlib==3.4.3
numpy==1.20.3
pandas==1.3.4
plotly==5.3.1
pyarrow==6.0.1
pandas-gbq==0.16.0
protobuf==3.19.1
pygeohash==1.2.0
rtree==0.9.7
pygeos==0.12.0
python_dateutil==2.8.2
Quandl==3.5.3
requests==2.26.0
sagemaker==2.70.0
scikit_learn==1.0.2
scipy==1.7.1
seaborn==0.11.2
sendgrid==6.7.0
Shapely==1.8.2
statsmodels==0.12.2
sympy==1.6.2
us==2.0.2
utm==0.7.0
walkscore_api==1.0.1
Werkzeug==1.0.1
xgboost==1.5.1
mysqlclient==2.1.0
pymysql==1.0.2
configparser==5.2.0
usaddress==0.5.0
markupsafe==2.0.1
geojson==2.5.0
```
- if frontend related, tell us your Browser, Version and OS
- Browser [chrome]
- Version [109.0.5414.119]
**Describe the bug**
I have a Dash App running inside a Flask App. I am seeing a bunch of JS errors in the console after upgrading dash to `2.7.1`. The Application renders a set of rows from a pandas dataframe containing `Lat`, `Long` on a mapbox map layer.
**Expected behavior**
The data must show on the map. It works as expected on dash v`2.6.1`.
More details about the code / app files can be found here: https://community.plotly.com/t/dash-js-errors-callback-error-updating/72124/4
|
closed
|
2023-01-31T23:10:41Z
|
2023-02-01T02:41:24Z
|
https://github.com/plotly/dash/issues/2406
|
[] |
kevalshah90
| 3
|
keras-rl/keras-rl
|
tensorflow
| 372
|
DDPG worked well but not CDQN or NAF !
|
I tried DDPG and everything worked well. Now I am trying your NAF example model on my custom environment and I am getting this error:
```
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
~\.conda\envs\sim\lib\site-packages\tensorflow\python\framework\ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
1625 try:
-> 1626 c_op = c_api.TF_FinishOperation(op_desc)
1627 except errors.InvalidArgumentError as e:
InvalidArgumentError: Dimension 1 in both shapes must be equal, but are 1 and 171. Shapes are [?,1] and [?,171].
From merging shape 0 with other shapes. for 'naf_layer_1/concat/concat_dim' (op: 'Pack') with input shapes: [?,1], [?,171].
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-9-c87ee2027050> in <module>
7 memory=memory, nb_steps_warmup=100, random_process=random_process,
8 gamma=.99, target_model_update=1e-3, processor=processor)
----> 9 agent.compile(Adam(lr=1e-5, clipnorm=1.), metrics=['mae'])
10
11 # Okay, now it's time to learn something! We visualize the training here for show, but this
~\.conda\envs\sim\lib\site-packages\rl\agents\dqn.py in compile(self, optimizer, metrics)
613
614 mu_out = self.mu_model(os_in)
--> 615 A_out = NAFLayer(self.nb_actions, mode=self.covariance_mode)([L_out, mu_out, a_in])
616 combined_out = Lambda(lambda x: x[0]+x[1], output_shape=lambda x: x[0])([A_out, V_out])
617 combined = Model(inputs=[a_in] + os_in, outputs=[combined_out])
~\.conda\envs\sim\lib\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs)
455 # Actually call the layer,
456 # collecting output(s), mask(s), and shape(s).
--> 457 output = self.call(inputs, **kwargs)
458 output_mask = self.compute_mask(inputs, previous_mask)
459
~\.conda\envs\sim\lib\site-packages\rl\agents\dqn.py in call(self, x, mask)
431 try:
432 # Old TF behavior.
--> 433 L_flat = tf.concat(1, [zeros, L_flat])
434 except TypeError:
435 # New TF behavior
~\.conda\envs\sim\lib\site-packages\tensorflow\python\ops\array_ops.py in concat(values, axis, name)
1119 ops.convert_to_tensor(
1120 axis, name="concat_dim",
-> 1121 dtype=dtypes.int32).get_shape().assert_is_compatible_with(
1122 tensor_shape.scalar())
1123 return identity(values[0], name=scope)
~\.conda\envs\sim\lib\site-packages\tensorflow\python\framework\ops.py in convert_to_tensor(value, dtype, name, preferred_dtype)
1046 name=name,
1047 preferred_dtype=preferred_dtype,
-> 1048 as_ref=False)
1049
1050
~\.conda\envs\sim\lib\site-packages\tensorflow\python\framework\ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx)
1142
1143 if ret is None:
-> 1144 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1145
1146 if ret is NotImplemented:
~\.conda\envs\sim\lib\site-packages\tensorflow\python\ops\array_ops.py in _autopacking_conversion_function(v, dtype, name, as_ref)
969 elif dtype != inferred_dtype:
970 v = nest.map_structure(_cast_nested_seqs_to_dtype(dtype), v)
--> 971 return _autopacking_helper(v, dtype, name or "packed")
972
973
~\.conda\envs\sim\lib\site-packages\tensorflow\python\ops\array_ops.py in _autopacking_helper(list_or_tuple, dtype, name)
921 elems_as_tensors.append(
922 constant_op.constant(elem, dtype=dtype, name=str(i)))
--> 923 return gen_array_ops.pack(elems_as_tensors, name=scope)
924 else:
925 return converted_elems
~\.conda\envs\sim\lib\site-packages\tensorflow\python\ops\gen_array_ops.py in pack(values, axis, name)
4687 axis = _execute.make_int(axis, "axis")
4688 _, _, _op = _op_def_lib._apply_op_helper(
-> 4689 "Pack", values=values, axis=axis, name=name)
4690 _result = _op.outputs[:]
4691 _inputs_flat = _op.inputs
~\.conda\envs\sim\lib\site-packages\tensorflow\python\framework\op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
785 op = g.create_op(op_type_name, inputs, output_types, name=scope,
786 input_types=input_types, attrs=attr_protos,
--> 787 op_def=op_def)
788 return output_structure, op_def.is_stateful, op
789
~\.conda\envs\sim\lib\site-packages\tensorflow\python\util\deprecation.py in new_func(*args, **kwargs)
486 'in a future version' if date is None else ('after %s' % date),
487 instructions)
--> 488 return func(*args, **kwargs)
489 return tf_decorator.make_decorator(func, new_func, 'deprecated',
490 _add_deprecated_arg_notice_to_docstring(
~\.conda\envs\sim\lib\site-packages\tensorflow\python\framework\ops.py in create_op(***failed resolving arguments***)
3270 input_types=input_types,
3271 original_op=self._default_original_op,
-> 3272 op_def=op_def)
3273 self._create_op_helper(ret, compute_device=compute_device)
3274 return ret
~\.conda\envs\sim\lib\site-packages\tensorflow\python\framework\ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def)
1788 op_def, inputs, node_def.attr)
1789 self._c_op = _create_c_op(self._graph, node_def, grouped_inputs,
-> 1790 control_input_ops)
1791
1792 # Initialize self._outputs.
~\.conda\envs\sim\lib\site-packages\tensorflow\python\framework\ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
1627 except errors.InvalidArgumentError as e:
1628 # Convert to ValueError for backwards compatibility.
-> 1629 raise ValueError(str(e))
1630
1631 return c_op
ValueError: Dimension 1 in both shapes must be equal, but are 1 and 171. Shapes are [?,1] and [?,171].
From merging shape 0 with other shapes. for 'naf_layer_1/concat/concat_dim' (op: 'Pack') with input shapes: [?,1], [?,171].
```
1. keras version: 2.2.4
2. Tensorflow version: 1.11.0
I also tried changing the model from Sequential to functional API and I am getting the exact same error!
ps: 171 comes from:
```
# Number of elements in a triangular matrix.
nb_elems = (self.nb_actions * self.nb_actions + self.nb_actions) // 2
```
as my nb_actions = 18
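A quick sanity check of that formula with `nb_actions = 18`:
```python
nb_actions = 18
nb_elems = (nb_actions * nb_actions + nb_actions) // 2
print(nb_elems)  # (324 + 18) // 2 = 171, matching the 171 in the shape error above
```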
|
closed
|
2020-12-30T15:10:27Z
|
2021-01-02T10:44:17Z
|
https://github.com/keras-rl/keras-rl/issues/372
|
[] |
B-Yassine
| 1
|
521xueweihan/HelloGitHub
|
python
| 2,768
|
[Open Source Self-Recommendation] A powerful web-based creative paint board
|
## Recommended Project
<!-- This is the entry point for recommending projects to the HelloGitHub monthly. Self-recommendations and recommendations of open source projects are welcome; the only requirement is to introduce the project following the prompts below. -->
<!-- Click "Preview" above to view the submitted content right away -->
<!-- Only open source projects hosted on GitHub are accepted; please fill in the GitHub project URL -->
- Project URL: https://github.com/LHRUN/paint-board
<!-- Please choose one of: C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning -->
- Category: JS
<!-- Describe what it does in roughly 20 characters, like an article title that is clear at a glance -->
- Project title: Paint Board: a powerful web-based creative paint board
<!-- What is this project, what can it be used for, what are its features or which pain points does it solve, which scenarios does it suit, and what can beginners learn from it? Length: 32-256 characters -->
- Project description: Paint Board is a powerful web-based creative paint board that integrates a variety of creative brushes and drawing aids, giving users a brand-new drawing experience. Everything is completely free and ready to use out of the box. It already supports multiple platforms, offering a good interactive experience and presentation on both mobile and PC.
<!-- What makes it stand out? What distinguishes it from similar projects? -->
- Highlights:
- Free drawing: supports 12 brush styles with highly customizable configuration to meet diverse drawing needs
- Shape drawing: supports drawing many common shapes, with multi-endpoint and style configuration
- Eraser: linearly erases any content, with configurable line width
- Selection mode: supports box-selecting all drawn content, with drag, scale, and rotate editing; all drawn content can be further customized, and layer configuration is supported
- Board configuration: supports background configuration and custom width/height, including color, background image, and opacity
- Drawing aids: all drawn content supports guide lines and a drawing cache; the PC and mobile versions provide shortcut keys and gesture-friendly interaction
- Multi-function menu: provides undo, redo, copy current selection, delete current selection, draw text, upload image, clear drawn content, save as image, and open file list
- Multiple boards: supports switching between multiple boards and uploading/downloading them
- Screenshot:

- Planned updates:
- Multi-platform gallery sync
- AI-enhanced drawing
|
closed
|
2024-06-10T13:34:45Z
|
2024-12-27T10:01:46Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2768
|
[
"已发布"
] |
LHRUN
| 0
|
Textualize/rich
|
python
| 3,499
|
[BUG] rich handler is documented In textual but not rich documentation
|
Rich's logging handler does not print Rich objects, as [documented](https://textual.textualize.io/guide/devtools/#logging-handler) in Textual:
"The logging library works with strings only, so you won't be able to log Rich renderables such as self.tree with the logging handler."
This is documented in Textual but not in the actual Rich logging [documentation](https://rich.readthedocs.io/en/stable/logging.html).
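For anyone landing on the Rich docs page, a minimal sketch of the limitation being described (the `Tree` here is just an example of a Rich renderable):
```python
import logging

from rich.logging import RichHandler
from rich.tree import Tree

logging.basicConfig(level="INFO", format="%(message)s", handlers=[RichHandler()])
log = logging.getLogger(__name__)

tree = Tree("root")
tree.add("child")

log.info("plain text is rendered as expected")
log.info(tree)  # logged via str(tree), i.e. an object repr, not a rendered tree
```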
|
open
|
2024-09-25T07:56:04Z
|
2024-09-25T07:56:21Z
|
https://github.com/Textualize/rich/issues/3499
|
[
"Needs triage"
] |
KRRT7
| 1
|
pyg-team/pytorch_geometric
|
deep-learning
| 9,176
|
SNAPDataset ram usage
|
### 🐛 Describe the bug
Ran the following code on Python 3.10/3.11 and the process got killed by the OS (tried on Windows/WSL/macOS)
for using too much RAM (tried both on a laptop with 16 GB of memory and a desktop PC with 64 GB of memory).
```python
from torch_geometric.datasets import SNAPDataset
dataset = SNAPDataset("./datasets/snap/twitter", "ego-twitter")
```
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.0 (x86_64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.1 (main, Jan 26 2023, 14:19:45) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-14.0-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i5-1038NG7 CPU @ 2.00GHz
Versions of relevant libraries:
[pip3] mypy==0.991
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.1
[pip3] torch==2.0.1
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[conda] Could not collect
|
closed
|
2024-04-09T11:31:36Z
|
2024-04-12T12:57:08Z
|
https://github.com/pyg-team/pytorch_geometric/issues/9176
|
[
"bug"
] |
omrihaber
| 2
|
ipython/ipython
|
data-science
| 14,078
|
IPython/terminal/ptutils.py", line 116, in get_completions exc_type, exc_value, exc_tb = sys.exc_info() NameError: name 'sys' is not defined
|
```
...:
Traceback (most recent call last):
File "/root/anaconda3/lib/python3.8/site-packages/IPython/terminal/ptutils.py", line 113, in get_completions
yield from self._get_completions(body, offset, cursor_position, self.ipy_completer)
File "/root/anaconda3/lib/python3.8/site-packages/IPython/terminal/ptutils.py", line 129, in _get_completions
for c in completions:
File "/root/anaconda3/lib/python3.8/site-packages/IPython/core/completer.py", line 438, in _deduplicate_completions
completions = list(completions)
File "/root/anaconda3/lib/python3.8/site-packages/IPython/core/completer.py", line 1818, in completions
for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
File "/root/anaconda3/lib/python3.8/site-packages/IPython/core/completer.py", line 1861, in _completions
matched_text, matches, matches_origin, jedi_matches = self._complete(
File "/root/anaconda3/lib/python3.8/site-packages/IPython/core/completer.py", line 2029, in _complete
completions = self._jedi_matches(
File "/root/anaconda3/lib/python3.8/site-packages/IPython/core/completer.py", line 1373, in _jedi_matches
interpreter = jedi.Interpreter(
File "/root/anaconda3/lib/python3.8/site-packages/jedi/api/__init__.py", line 859, in __init__
project=Project(os.getcwd()), **kwds)
FileNotFoundError: [Errno 2] No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/anaconda3/bin/ipython", line 11, in <module>
sys.exit(start_ipython())
File "/root/anaconda3/lib/python3.8/site-packages/IPython/__init__.py", line 126, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/root/anaconda3/lib/python3.8/site-packages/traitlets/config/application.py", line 845, in launch_instance
app.start()
File "/root/anaconda3/lib/python3.8/site-packages/IPython/terminal/ipapp.py", line 356, in start
self.shell.mainloop()
File "/root/anaconda3/lib/python3.8/site-packages/IPython/terminal/interactiveshell.py", line 564, in mainloop
self.interact()
File "/root/anaconda3/lib/python3.8/site-packages/IPython/terminal/interactiveshell.py", line 547, in interact
code = self.prompt_for_code()
File "/root/anaconda3/lib/python3.8/site-packages/IPython/terminal/interactiveshell.py", line 473, in prompt_for_code
text = self.pt_app.prompt(
File "/root/anaconda3/lib/python3.8/site-packages/prompt_toolkit/shortcuts/prompt.py", line 1013, in prompt
return self.app.run(set_exception_handler=set_exception_handler)
File "/root/anaconda3/lib/python3.8/site-packages/prompt_toolkit/application/application.py", line 816, in run
return loop.run_until_complete(
File "/root/anaconda3/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/root/anaconda3/lib/python3.8/site-packages/prompt_toolkit/application/application.py", line 783, in run_async
return await _run_async2()
File "/root/anaconda3/lib/python3.8/site-packages/prompt_toolkit/application/application.py", line 771, in _run_async2
await self.cancel_and_wait_for_background_tasks()
File "/root/anaconda3/lib/python3.8/site-packages/prompt_toolkit/application/application.py", line 872, in cancel_and_wait_for_background_tasks
await task
File "/root/anaconda3/lib/python3.8/site-packages/prompt_toolkit/buffer.py", line 1854, in new_coroutine
await coroutine(*a, **kw)
File "/root/anaconda3/lib/python3.8/site-packages/prompt_toolkit/buffer.py", line 1683, in async_completer
async for completion in self.completer.get_completions_async(
File "/root/anaconda3/lib/python3.8/site-packages/prompt_toolkit/completion/base.py", line 269, in get_completions_async
async for completion in completer.get_completions_async(
File "/root/anaconda3/lib/python3.8/site-packages/prompt_toolkit/completion/base.py", line 196, in get_completions_async
for item in self.get_completions(document, complete_event):
File "/root/anaconda3/lib/python3.8/site-packages/IPython/terminal/ptutils.py", line 116, in get_completions
exc_type, exc_value, exc_tb = sys.exc_info()
NameError: name 'sys' is not defined
If you suspect this is an IPython 7.19.0 bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@python.org
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
%config Application.verbose_crash=True
```
|
open
|
2023-05-14T13:14:46Z
|
2023-05-14T13:14:46Z
|
https://github.com/ipython/ipython/issues/14078
|
[] |
QGB
| 0
|
deepinsight/insightface
|
pytorch
| 2,227
|
How to use SCRFD detect() in latest insightface?
|
Hello,
I want to be able to call the detect here:
https://github.com/deepinsight/insightface/blob/6baaa7bcaf1a1624feec75270022e2dafeb6883b/detection/scrfd/tools/scrfd.py
I have this code:
```
detector = insightface.model_zoo.model_zoo.get_model('insightface/models/antelope/scrfd_10g_bnkps.onnx')
detector.prepare(0,input_size=(640,640))
bboxes,kpss = detector.detect(img,0.5,None,1,'default')
```
But this no longer works in the latest version.
Thanks!
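In recent insightface releases, the simplest route is the bundled `FaceAnalysis` app, which wraps an SCRFD detector (a sketch; the `buffalo_l` pack name is one of the current defaults and is an assumption here, since it differs from the antelope model above):
```python
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(name='buffalo_l')      # loads a model pack that includes an SCRFD detector
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread('test.jpg')
faces = app.get(img)
bboxes = [f.bbox for f in faces]          # detection boxes
kpss = [f.kps for f in faces]             # 5-point landmarks
```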
|
closed
|
2023-01-23T16:43:11Z
|
2023-01-24T16:08:21Z
|
https://github.com/deepinsight/insightface/issues/2227
|
[] |
aesanchezgh
| 5
|
google/seq2seq
|
tensorflow
| 250
|
Attention Context's Interaction with Decoder
|
Hi, I am looking into how AttentionDecoder exactly works. I know the attention_context is supposed to be concatenated with the hidden state from the previous time step (Line 128 in seq2seq/decoder/attention_decoder.py) and fed into the current time step. But I noticed that the "attention_context" variable is ALSO concatenated with the embedded inputs and fed into the LSTM cell (Line 156 in seq2seq/decoder/attention_decoder.py). I confirmed this by printing the shape of the LSTM's input tensor. I read Google's NMT and GNMT papers and multiple others, but it seems that those papers don't feed the concatenation of embedded inputs and attention_context to the LSTM cells. Do you have any references or papers that I can look at to understand the algorithm behind this implementation? Or maybe I am just misunderstanding the code?
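For what it's worth, the closest published description seems to be the "input feeding" approach from Luong et al. (2015), where the attentional vector from the previous step is concatenated with the next input embedding. A schematic sketch of the per-step computation under that reading (conceptual pseudocode with dot-product attention and matching dimensions assumed, not the repository's actual code):
```python
import tensorflow as tf

def decoder_step(cell, embedded_input, prev_attention_context, prev_state, encoder_outputs):
    # "input feeding": the previous attention context is concatenated with the
    # current input embedding before entering the RNN cell
    cell_input = tf.concat([embedded_input, prev_attention_context], axis=-1)
    cell_output, state = cell(cell_input, prev_state)

    # fresh attention context computed from the new cell output (dot-product scores)
    scores = tf.reduce_sum(encoder_outputs * tf.expand_dims(cell_output, 1), axis=-1)
    weights = tf.nn.softmax(scores)
    attention_context = tf.reduce_sum(encoder_outputs * tf.expand_dims(weights, -1), axis=1)

    # the softmax/projection input is usually the concat of cell output and context
    logits_input = tf.concat([cell_output, attention_context], axis=-1)
    return logits_input, attention_context, state
```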
|
open
|
2017-06-07T21:55:45Z
|
2017-06-07T22:08:13Z
|
https://github.com/google/seq2seq/issues/250
|
[] |
ghost
| 0
|
AirtestProject/Airtest
|
automation
| 548
|
Unable to launch the script on Android 9; getting the error below: RuntimeError: unable to launch AndroidUiautomationPoco
|
save log in '/private/var/folders/3r/j0yh54zj32b_pp_6336p9sx40000gp/T/AirtestIDE/scripts/f643ec8dd315e2d641f7865d1a6b739b'
[02:27:10][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd wait-for-device
[02:27:10][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell getprop ro.build.version.sdk
custom setup
[02:27:11][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell dumpsys activity top
[02:27:11][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell dumpsys package com.netease.nie.yosemite
[02:27:11][INFO]<airtest.core.android.yosemite> local version code is 288, installed version code is 288
[02:27:11][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell settings get secure default_input_method
[02:27:11][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell ime list -a
[02:27:11][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell ime enable com.netease.nie.yosemite/.ime.ImeService
[02:27:11][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell ime set com.netease.nie.yosemite/.ime.ImeService
[02:27:11][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell dumpsys package com.netease.open.pocoservice
installed version is None, installer version is 41. force_reinstall=False
[02:27:11][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd install /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/poco/drivers/android/lib/pocoservice-debug.apk
[02:27:14][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell dumpsys package com.netease.open.pocoservice.test
installed version is 0, installer version is 0. force_reinstall=True
[02:27:14][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd install -r /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/poco/drivers/android/lib/pocoservice-debug-androidTest.apk
[02:27:15][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd forward --no-rebind tcp:16344 tcp:10080
[02:27:15][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd forward --no-rebind tcp:17793 tcp:10081
[02:27:15][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell ps
[02:27:16][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell am force-stop com.netease.open.pocoservice
[02:27:16][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell am start -n com.netease.open.pocoservice/.TestActivity
[02:27:16][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell am instrument -w -e debug false -e class com.netease.open.pocoservice.InstrumentedTestAsLauncher com.netease.open.pocoservice.test/android.support.test.runner.AndroidJUnitRunner
still waiting for uiautomation ready.
still waiting for uiautomation ready.
still waiting for uiautomation ready.
still waiting for uiautomation ready.
still waiting for uiautomation ready.
still waiting for uiautomation ready.
[02:27:33][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell am force-stop com.netease.open.pocoservice
still waiting for uiautomation ready.
still waiting for uiautomation ready.
still waiting for uiautomation ready.
[02:27:33][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell am instrument -w -e debug false -e class com.netease.open.pocoservice.InstrumentedTestAsLauncher com.netease.open.pocoservice.test/android.support.test.runner.AndroidJUnitRunner
still waiting for uiautomation ready.
[02:27:32][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell dumpsys package com.netease.open.pocoservice.test
installed version is 0, installer version is 0. force_reinstall=True
[02:27:28][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd uninstall com.netease.open.pocoservice
[02:27:29][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell dumpsys package com.netease.open.pocoservice
installed version is None, installer version is 41. force_reinstall=False
[02:27:29][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd install /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/poco/drivers/android/lib/pocoservice-debug.apk
[02:27:32][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd install -r /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/poco/drivers/android/lib/pocoservice-debug-androidTest.apk
still waiting for uiautomation ready.
[02:27:33][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell am start -n com.netease.open.pocoservice/.TestActivity
still waiting for uiautomation ready.
still waiting for uiautomation ready.
still waiting for uiautomation ready.
still waiting for uiautomation ready.
still waiting for uiautomation ready.
still waiting for uiautomation ready.
still waiting for uiautomation ready.
still waiting for uiautomation ready.
still waiting for uiautomation ready.
[02:27:45][DEBUG]<airtest.core.android.adb> /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/android/static/adb/mac/adb -P 5037 -s 5ab2fefd shell dumpsys activity top
custom tearDown
======================================================================
ERROR: runTest (__main__.CustomCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/cli/runner.py", line 65, in runTest
six.reraise(*sys.exc_info())
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/cli/runner.py", line 61, in runTest
exec(compile(code.encode("utf-8"), pyfilepath, 'exec'), self.scope)
File "/Users/playsimplegames/qa_automation/wordwars1/ww_adb.air/ww_adb.py", line 11, in <module>
android_poco = AndroidUiautomationPoco()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/poco/drivers/android/uiautomation.py", line 206, in __init__
raise RuntimeError("unable to launch AndroidUiautomationPoco")
RuntimeError: unable to launch AndroidUiautomationPoco
|
open
|
2019-10-01T09:00:33Z
|
2020-04-14T01:37:15Z
|
https://github.com/AirtestProject/Airtest/issues/548
|
[] |
anandplay17
| 2
|
dgtlmoon/changedetection.io
|
web-scraping
| 2,209
|
500 Internal Server Error
|
**Version**
0.45.14
I'm using changedetection.io Docker image. Unexpectedly, trying to access port 5000 results in receiving an Internal Server Error. In the container logs, the following error can be seen:
```
[2024-02-22 15:49:12,914] ERROR in app: Exception on / [GET]
Traceback (most recent call last):
File "/usr/local/jinja2/environment.py", line 466, in getitem
return obj[argument]
~~~^^^^^^^^^^
KeyError: 'last_changed'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/flask/app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/flask/app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/flask_restful/__init__.py", line 298, in error_router
return original_handler(e)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/flask/app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/flask/app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/changedetectionio/flask_app.py", line 208, in decorated_view
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/app/changedetectionio/flask_app.py", line 460, in index
output = render_template(
^^^^^^^^^^^^^^^^
File "/usr/local/flask/templating.py", line 151, in render_template
return _render(app, template, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/flask/templating.py", line 132, in _render
rv = template.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/jinja2/environment.py", line 1301, in render
self.environment.handle_exception()
File "/usr/local/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "/app/changedetectionio/templates/watch-overview.html", line 1, in top-level template code
{% extends 'base.html' %}
File "/app/changedetectionio/templates/base.html", line 207, in top-level template code
{% block content %}{% endblock %}
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/changedetectionio/templates/watch-overview.html", line 81, in block 'content'
{% for watch in (watches|sort(attribute=sort_attribute, reverse=sort_order == 'asc'))|pagination_slice(skip=pagination.skip) %}
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/jinja2/filters.py", line 423, in do_sort
return sorted(value, key=key_func, reverse=reverse)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/jinja2/filters.py", line 112, in attrgetter
item_i = environment.getitem(item_i, part)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/jinja2/environment.py", line 475, in getitem
return getattr(obj, attr)
^^^^^^^^^^^^^^^^^^
File "/app/changedetectionio/model/Watch.py", line 193, in last_changed
return int(self.__newest_history_key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: invalid literal for int() with base 10: '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0
```
I haven't made any changes in the configuration. After seeing the error, I updated the containers dgtlmoon/changedetection.io and browserless/chrome to the latest builds, but it didn't resolve the described issue.
|
closed
|
2024-02-22T16:05:19Z
|
2024-02-25T09:26:58Z
|
https://github.com/dgtlmoon/changedetection.io/issues/2209
|
[
"triage"
] |
ghost
| 2
|
skypilot-org/skypilot
|
data-science
| 4,480
|
[chore] Cleanup branches
|
<!-- Describe the bug report / feature request here -->
There are too many branches (~350) in the upstream, which makes forking and fetching quite messy. Could you please delete the merged branches? Thank you!
|
open
|
2024-12-18T07:52:24Z
|
2024-12-19T23:08:30Z
|
https://github.com/skypilot-org/skypilot/issues/4480
|
[] |
gaocegege
| 0
|
TencentARC/GFPGAN
|
deep-learning
| 47
|
Train with GPU and inference without GPU. Is it possible ?
|
Hello :)
One more thing - thank you very much for your beautiful project!
1. I trained a model on my own dataset - mymodel.pth
2. I ran inference on CPU with your model - GFPGANCleanv1-NoCE-C2.pth
3. I see that GFPGANv1.pth (and mymodel.pth) is about 2x the size of GFPGANCleanv1-NoCE-C2.pth
So, how can I transform mymodel.pth for inference on CPU? Or maybe I should train another model?
Thank you :))
|
closed
|
2021-08-18T22:53:00Z
|
2021-08-23T23:21:31Z
|
https://github.com/TencentARC/GFPGAN/issues/47
|
[] |
MDYLL
| 4
|
plotly/dash
|
data-visualization
| 2,329
|
Allow currency format for dcc.Input of type='number'
|
**Describe the solution you'd like**
Allow currency format for dcc.Input while keeping the value type as number and the arrows to increase/decrease it.
**Describe alternatives you've considered**
Clientside circular callback that updates the value format based on [this JS solution](https://stackoverflow.com/questions/9372624/formatting-a-number-as-currency-using-css):
```
import dash
from dash import Input, Output, State, callback, dcc, html, ctx
app = dash.Dash(__name__)
app.layout = html.Div([
dcc.Input(id='input-number', value=522.31, type='number')
])
app.clientside_callback(
"""
function(value) {
return value.toLocaleString('us-US', { style: 'currency', currency: 'USD' });
}
""",
Output('input-number', 'value'),
Input('input-number', 'value')
)
if __name__ == '__main__':
app.run_server(debug=True)
```
It raises this error in the JS Console:
```
react-dom@16.v2_7_0m1667919873.14.0.js:1857 The specified value "-US$1.00" cannot be parsed, or is out of range.
```
If `type='number'` is not specified, the value is accepted but the arrows to increase/decrease the value disappear.
Other option is a one-cell table with html.Buttons and a callback to increase/decrease the value (it's very complex for just a simple input)
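Another workaround, sketched under the assumption that a read-only formatted display next to the raw input is acceptable: keep `type='number'` (so the spinner arrows stay) and mirror the value into a separate element with a clientside callback, which avoids writing a formatted string back into the numeric input. The component ids are illustrative:
```python
import dash
from dash import Input, Output, dcc, html

app = dash.Dash(__name__)

app.layout = html.Div([
    dcc.Input(id='input-number', value=522.31, type='number'),
    html.Span(id='formatted-display', style={'marginLeft': '1em'}),
])

# Mirror the numeric value into a display-only element instead of
# pushing a formatted string back into the number input.
app.clientside_callback(
    """
    function(value) {
        if (value === null || value === undefined) { return ''; }
        return value.toLocaleString('en-US', { style: 'currency', currency: 'USD' });
    }
    """,
    Output('formatted-display', 'children'),
    Input('input-number', 'value')
)

if __name__ == '__main__':
    app.run_server(debug=True)
```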
|
open
|
2022-11-21T14:46:30Z
|
2024-08-13T19:22:53Z
|
https://github.com/plotly/dash/issues/2329
|
[
"feature",
"P3"
] |
celia-lm
| 0
|
pydata/xarray
|
numpy
| 9,204
|
DataTree.coords.__setitem__ is broken
|
> One messy thing is that it appears that assignment via `.coords` is broken on `DataTree` even at `main`.
Ah yes - I forgot there's a TODO deep somewhere for that 😅 I left it for later because it seemed like it might require changing the `DatasetCoordinates` class that `ds.coords` returns. (Surely that class could just be replaced by the new `xr.Coordinates` class now??)
_Originally posted by @TomNicholas in https://github.com/pydata/xarray/issues/9063#issuecomment-2189180572_
|
closed
|
2024-07-01T22:42:50Z
|
2024-09-11T04:03:34Z
|
https://github.com/pydata/xarray/issues/9204
|
[
"bug",
"topic-DataTree"
] |
shoyer
| 1
|
BeanieODM/beanie
|
asyncio
| 458
|
[BUG] PydanticObjectId Serialization Issue When Beanie is Used With Starlite
|
**Describe the bug**
Starlite raises an HTTP 500 error when trying to return a Beanie `Document`. It seems to be due to the `PydanticObjectId` type not being JSON serializable. The issue was discussed [here](https://github.com/starlite-api/starlite/discussions/819) on the Starlite repo. Is this an issue that can be fixed within Beanie, or should it be addressed within Starlite?
|
closed
|
2022-12-26T12:08:31Z
|
2023-01-09T16:18:55Z
|
https://github.com/BeanieODM/beanie/issues/458
|
[
"bug"
] |
bwhli
| 5
|
deeppavlov/DeepPavlov
|
tensorflow
| 1,550
|
refactor: tensorboard on pytorch
|
**What problem are we trying to solve?**:
```
TensorBoard is currently used via TensorFlow 1.15, which is going to be removed.
```
**How can we solve it?**:
```
Rewrite tensorboard usage using pytorch api
```
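A minimal sketch of the PyTorch-side API this would presumably move to (the log directory and tag names are illustrative, not DeepPavlov internals):
```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='runs/example')  # placeholder directory

for step, loss in enumerate([0.9, 0.7, 0.5]):
    writer.add_scalar('train/loss', loss, global_step=step)

writer.flush()
writer.close()
```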
|
closed
|
2022-04-12T07:06:31Z
|
2022-06-22T06:12:13Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1550
|
[
"enhancement"
] |
IgnatovFedor
| 1
|
HIT-SCIR/ltp
|
nlp
| 368
|
How can I fine-tune a model for my own task?
|
Hi, I'm trying to use LTP to fine-tune a model for my own task. Which file should I start from to fine-tune my model?
I tried running python __main__.py --config "./tiny/config.json"
and got the following error:
Traceback (most recent call last):
File "__main__.py", line 23, in <module>
run()
File "__main__.py", line 19, in run
Fire(Command)
File "/home/ray.yao/anaconda2/envs/ltp_demo/lib/python3.7/site-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/ray.yao/anaconda2/envs/ltp_demo/lib/python3.7/site-packages/fire/core.py", line 468, in _Fire
target=component.__name__)
File "/home/ray.yao/anaconda2/envs/ltp_demo/lib/python3.7/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/home/ray.yao/tmp/ltp-master/ltp/exe/commands.py", line 17, in __init__
self.config = Config(config, device)
File "/home/ray.yao/tmp/ltp-master/ltp/exe/config.py", line 20, in __init__
config = toml.load(config_path)
File "/home/ray.yao/anaconda2/envs/ltp_demo/lib/python3.7/site-packages/toml/decoder.py", line 134, in load
return loads(ffile.read(), _dict, decoder)
File "/home/ray.yao/anaconda2/envs/ltp_demo/lib/python3.7/site-packages/toml/decoder.py", line 214, in loads
" Reached end of line.", original, i)
toml.decoder.TomlDecodeError: Key name found without value. Reached end of line. (line 1 column 2 char 1)
|
closed
|
2020-06-18T10:30:40Z
|
2020-06-19T07:30:06Z
|
https://github.com/HIT-SCIR/ltp/issues/368
|
[] |
junrong1
| 0
|
microsoft/unilm
|
nlp
| 1,336
|
[kosmos-g] Problem about docker image setup
|
When installing xformers according to the official instructions, it fails.
Combining a low version of torch with a high version of xformers is difficult to install.
Can anyone offer a docker image?
|
open
|
2023-10-17T09:08:25Z
|
2024-07-01T14:24:58Z
|
https://github.com/microsoft/unilm/issues/1336
|
[] |
caicj15
| 15
|
piskvorky/gensim
|
data-science
| 3,520
|
bug about remove_markup
|
#### Problem description
After calling gensim.corpora.wikicorpus.filter_wiki, there is still markup that has not been stripped.
```python
RE_P1 = re.compile(r'<ref([> ].*?)(</ref>|/>)', re.DOTALL | re.UNICODE)
```
Before stripping RE_P1, markup matching the following pattern should be stripped first.
```python
re.compile('(?:<br />|<br/>|<nowiki/>)', re.DOTALL | re.UNICODE)
```
#### Steps/code/corpus to reproduce
```python
import re
from gensim.corpora.wikicorpus import extract_pages,filter_wiki
# https://zh.wikipedia.org/wiki/%E5%90%89%E6%9E%97%E7%9C%81
s1 = '''
{{seealso|吉林省各市州人口列表}}
2022年末,全省总人口为2347.69万人<ref>吉林省2022年人口<nowiki/>https://www.hongheiku.com/sjrk/1059.html</ref>,其中城镇常住人口1496.18万人,占总人口比重(常住人口城镇化率)为63.73%,比上年末提高0.37个百分点。户籍人口城镇化率为49.08%。全年出生人口10.23万人,出生率为4.33‰;死亡人口19.84万人,死亡率为8.40‰;自然增长率为-4.07‰。人口性别比为99.83(以女性为100)。
'''
# https://zh.wikipedia.org/wiki/%E7%BB%8F%E6%B5%8E%E5%AD%A6
s2 = '''
羅賓斯認為,此定義注重的不是以經濟學「研究某些行為」,而是要以分析的角度去「研究行為是如何被資源有限的條件所改變」<ref>Robbins, Lionel (1932). ''An Essay on the Nature and Significance of Economic Science'', p. [http://books.google.com/books?id=nySoIkOgWQ4C&printsec=find&pg=PA16#v=onepage&q&f=false 16] {{Wayback|url=http://books.google.com/books?id=nySoIkOgWQ4C&printsec=find&pg=PA16#v=onepage&q&f=false |date=20130910062356 }}.</ref>。一些人批評此定義過度廣泛,而且無法將分析範疇侷限在對於市場的研究上。然而,自從1960年代起,由於理性選擇理論和其引發的[[賽局理論]]不斷將經濟學的研究領域擴張,這個定義已為世所認<ref name="Backhouse2009Stigler">Backhouse, Roger E., and Steven G. Medema (2009). "Defining Economics: The Long Road to Acceptance of the Robbins Definition", ''Economica'', 76(302), [http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0335.2009.00789.x/full#ss4 V. Economics Spreads Its Wings] {{Wayback|url=http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0335.2009.00789.x/full#ss4 |date=20130602222736 }}. [Pp. [http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0335.2009.00789.x/full 805–820] {{Wayback|url=http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0335.2009.00789.x/full |date=20130602222736 }}.]<br/> [[喬治·斯蒂格勒|Stigler, George J.]] (1984). "Economics—The Imperial Science?" ''Scandinavian Journal of Economics'', 86(3), pp. [http://www.jstor.org/pss/3439864 301]-313.</ref>,但仍有對此定義的批評<ref>Blaug, Mark (2007). "The Social Sciences: Economics", ''The New Encyclopædia Britannica'', v. 27, p. 343 [pp. 343–52].</ref>。
'''
print(filter_wiki(s1))
print(filter_wiki(s2))
print('=============')
RE_P = re.compile('(?:<br />|<br/>|<nowiki/>)', re.DOTALL | re.UNICODE)
print(filter_wiki(re.sub(RE_P, '', s1)))
print(filter_wiki(re.sub(RE_P, '', s2)))
```
### output:
2022年末,全省总人口为2347.69万人https://www.hongheiku.com/sjrk/1059.html,其中城镇常住人口1496.18万人,占总人口比重(常住人口城镇化率)为63.73%,比上年末提高0.37个百分点。户籍人口城镇化率为49.08%。全年出生人口10.23万人,出生率为4.33‰;死亡人口19.84万人,死亡率为8.40‰;自然增长率为-4.07‰。人口性别比为99.83(以女性为100)。
羅賓斯認為,此定義注重的不是以經濟學「研究某些行為」,而是要以分析的角度去「研究行為是如何被資源有限的條件所改變」。一些人批評此定義過度廣泛,而且無法將分析範疇侷限在對於市場的研究上。然而,自從1960年代起,由於理性選擇理論和其引發的賽局理論不斷將經濟學的研究領域擴張,這個定義已為世所認 Stigler, George J. (1984). "Economics—The Imperial Science?" ''Scandinavian Journal of Economics'', 86(3), pp. 301-313.,但仍有對此定義的批評。
=============
2022年末,全省总人口为2347.69万人,其中城镇常住人口1496.18万人,占总人口比重(常住人口城镇化率)为63.73%,比上年末提高0.37个百分点。户籍人口城镇化率为49.08%。全年出生人口10.23万人,出生率为4.33‰;死亡人口19.84万人,死亡率为8.40‰;自然增长率为-4.07‰。人口性别比为99.83(以女性为100)。
羅賓斯認為,此定義注重的不是以經濟學「研究某些行為」,而是要以分析的角度去「研究行為是如何被資源有限的條件所改變」。一些人批評此定義過度廣泛,而且無法將分析範疇侷限在對於市場的研究上。然而,自從1960年代起,由於理性選擇理論和其引發的賽局理論不斷將經濟學的研究領域擴張,這個定義已為世所認,但仍有對此定義的批評。
#### Versions
Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Bits 64
NumPy 1.26.4
SciPy 1.12.0
gensim 4.3.2
FAST_VERSION 0
wiki text from [zhwiki-20231201-pages-articles-multistream1.xml-p1p187712.bz2](https://dumps.wikimedia.org/zhwiki/20231201/zhwiki-20231201-pages-articles-multistream1.xml-p1p187712.bz2)
|
open
|
2024-03-28T03:34:24Z
|
2024-04-11T08:27:31Z
|
https://github.com/piskvorky/gensim/issues/3520
|
[] |
seadog-www
| 2
|
chezou/tabula-py
|
pandas
| 350
|
Don't ignore empty columns in tables spanning multiple pages
|
**Is your feature request related to a problem? Please describe.**
<!--- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
I have a PDF file with multiple pages. From page two to the end (page 29) there is a single table spanning all of those pages. On some pages certain columns are empty because there are no values for those columns in the rows on that page. The first line of the table is the table's name,
and the second line contains the actual header.
Right now it seems impossible to read the table in: I cannot map the columns from page 2, line 2 onto each DataFrame because some empty columns are simply ignored.
**Describe the solution you'd like**
<!--- A clear and concise description of what you want to happen. -->
i would like tabula to not ignore empty columns in tables where the table is over multiple pages.
**Describe alternatives you've considered**
<!--- A clear and concise description of any alternative solutions or features you've considered. -->
The only alternative I see at this point is either copying 28 pages of values by hand or parsing the PDF myself in Python, and I rate my chances there as very low.
**Additional context**
<!--- Add any other context or screenshots about the feature request here. -->
Sorry, if that request seems to be unnecessary. I spent more than an hour searching for a solution to this and I was unable to find a solution.
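A possible workaround until this is supported natively, assuming the column x-positions can be measured once in a PDF viewer: disable tabula's per-page column guessing and pass explicit `columns` boundaries so empty columns are not collapsed. The path, page range, and coordinates below are placeholders:
```python
import tabula

# x-coordinates (in points) of the column boundaries, measured from the left page edge
column_positions = [95, 180, 265, 350, 435, 520]

dfs = tabula.read_pdf(
    "report.pdf",                      # placeholder path
    pages="2-29",
    stream=True,                       # assuming the table has no ruling lines
    guess=False,                       # don't let tabula infer columns per page
    columns=column_positions,
    pandas_options={"header": None},   # apply the real header (page 2, line 2) afterwards
)
```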
|
closed
|
2023-06-26T16:44:31Z
|
2023-06-26T16:51:42Z
|
https://github.com/chezou/tabula-py/issues/350
|
[] |
awesome-crab
| 1
|
apache/airflow
|
machine-learning
| 47,646
|
Fix Connections Test Endpoint Should Allow None in the Request Body
|
### Description
The test endpoint doesn't need `Connection Type` since the connection should already exist in the database.
### Are you willing to submit a PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
closed
|
2025-03-11T21:25:04Z
|
2025-03-16T13:49:07Z
|
https://github.com/apache/airflow/issues/47646
|
[
"area:API",
"type:bug-fix",
"priority:medium",
"affected_version:3.0.0beta"
] |
bugraoz93
| 1
|
JaidedAI/EasyOCR
|
pytorch
| 601
|
Downloading detection model too slow
|
Hi, when I run the code on Windows, it displays "Downloading detection model, please wait. This may take several minutes depending upon your network connection."
Then it keeps downloading for a long time.
Even with a VPN enabled, the progress is very slow.
I installed it with "pip install easyocr".
And the code is here:
```
import easyocr
reader = easyocr.Reader(['en'])
result = reader.readtext('1.jpg')
result
```
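If the in-process download keeps stalling, a workaround that is often suggested is to download the model archives manually (detection: craft_mlt_25k, recognition: english_g2, linked from the EasyOCR model hub) and point the reader at the local folder. A sketch, with the directory path as a placeholder:
```python
import easyocr

# The model files were downloaded and unzipped into this folder beforehand.
reader = easyocr.Reader(
    ['en'],
    model_storage_directory='C:/easyocr_models',  # placeholder path
    download_enabled=False,                       # skip the in-process download
)

result = reader.readtext('1.jpg')
print(result)
```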
|
closed
|
2021-11-27T07:12:11Z
|
2022-08-07T05:01:23Z
|
https://github.com/JaidedAI/EasyOCR/issues/601
|
[] |
LiHangBing
| 2
|
huggingface/datasets
|
nlp
| 7,059
|
None values are skipped when reading jsonl in subobjects
|
### Describe the bug
I have been fighting against my machine since this morning only to find out this is some kind of a bug.
When loading a dataset composed of `metadata.jsonl`, if you have nullable values (Optional[str]), they can be ignored by the parser, shifting things around.
For example, here are two versions of the same dataset:
[not-buggy.tar.gz](https://github.com/user-attachments/files/16333532/not-buggy.tar.gz)
[buggy.tar.gz](https://github.com/user-attachments/files/16333553/buggy.tar.gz)
### Steps to reproduce the bug
1. Load the `buggy.tar.gz` dataset
2. Print the baselines with `dts = load_dataset("./data")["train"][0]["baselines"]`
3. Load the `not-buggy.tar.gz` dataset
4. Print the baselines with `dts = load_dataset("./data")["train"][0]["baselines"]`
### Expected behavior
Both should have 4 baseline entries:
1. Buggy should have None followed by three lists
2. Non-Buggy should have four lists, and the first one should be an empty list.
Case 1 does not work while case 2 does, even though None is accepted in positions other than the first.
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
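A workaround that may help while this is open: declare the schema explicitly so the JSON reader doesn't infer types from rows where the field is null. The field names and nesting below are guesses based on the description, not the actual dataset:
```python
from datasets import load_dataset, Features, Sequence, Value

features = Features({
    "file_name": Value("string"),
    # nullable list-of-lists field; adjust the inner type/nesting to the real data
    "baselines": Sequence(Sequence(Value("int64"))),
})

meta = load_dataset("json", data_files="data/train/metadata.jsonl", features=features)
print(meta["train"][0]["baselines"])
```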
|
open
|
2024-07-22T13:02:42Z
|
2024-07-22T13:02:53Z
|
https://github.com/huggingface/datasets/issues/7059
|
[] |
PonteIneptique
| 0
|
slackapi/python-slack-sdk
|
asyncio
| 1,590
|
Message ID from messages with files.
|
Hi!
The `files_upload_v2` function does not return `ts` and `thread_ts` values. For my program, I need to know the parent and child message IDs. I tried to use the `permalink` in the blocks argument of `postMessage`, but I get an `invalid_blocks` error.
And for some reason there are two absolutely identical file objects in my response (I don’t know, maybe that’s how it should be).
**Question**: How can I get the parent and child message IDs for a message with attached files/images?
Thanks!
```
response = slack_client.files_upload_v2(
file = image_path
)
```
```
{
"ok": true,
"files": [
{
"id": "F0808JW9XMY",
"created": 1731277524,
"timestamp": 1731277524,
"name": "61GsgAsIIKL._AC_SL1400_.jpg",
"title": "Uploaded file",
"mimetype": "",
"filetype": "",
"pretty_type": "",
"user": "U07RZCBPMUL",
"user_team": "T0AJK4YAU",
"editable": false,
"size": 55409,
"mode": "hosted",
"is_external": false,
"external_type": "",
"is_public": false,
"public_url_shared": false,
"display_as_bot": false,
"username": "",
"url_private": "https://files.slack.com/files-pri/T0AJK4YAU-F0808JW9XMY/61gsgasiikl._ac_sl1400_.jpg",
"url_private_download": "https://files.slack.com/files-pri/T0AJK4YAU-F0808JW9XMY/download/61gsgasiikl._ac_sl1400_.jpg",
"media_display_type": "unknown",
"permalink": "https://kievhacklab.slack.com/files/U07RZCBPMUL/F0808JW9XMY/61gsgasiikl._ac_sl1400_.jpg",
"permalink_public": "https://slack-files.com/T0AJK4YAU-F0808JW9XMY-8c4adea2dd",
"comments_count": 0,
"is_starred": false,
"shares": {},
"channels": [],
"groups": [],
"ims": [],
"has_more_shares": false,
"has_rich_preview": false,
"file_access": "visible"
}
],
"file": {
"id": "F0808JW9XMY",
"created": 1731277524,
"timestamp": 1731277524,
"name": "61GsgAsIIKL._AC_SL1400_.jpg",
"title": "Uploaded file",
"mimetype": "",
"filetype": "",
"pretty_type": "",
"user": "U07RZCBPMUL",
"user_team": "T0AJK4YAU",
"editable": false,
"size": 55409,
"mode": "hosted",
"is_external": false,
"external_type": "",
"is_public": false,
"public_url_shared": false,
"display_as_bot": false,
"username": "",
"url_private": "https://files.slack.com/files-pri/T0AJK4YAU-F0808JW9XMY/61gsgasiikl._ac_sl1400_.jpg",
"url_private_download": "https://files.slack.com/files-pri/T0AJK4YAU-F0808JW9XMY/download/61gsgasiikl._ac_sl1400_.jpg",
"media_display_type": "unknown",
"permalink": "https://kievhacklab.slack.com/files/U07RZCBPMUL/F0808JW9XMY/61gsgasiikl._ac_sl1400_.jpg",
"permalink_public": "https://slack-files.com/T0AJK4YAU-F0808JW9XMY-8c4adea2dd",
"comments_count": 0,
"is_starred": false,
"shares": {},
"channels": [],
"groups": [],
"ims": [],
"has_more_shares": false,
"has_rich_preview": false,
"file_access": "visible"
}
}
```
```
blocks = [
{
"type": "image",
"title": {
"type": "plain_text",
"text": "Please enjoy this photo of a kitten"
},
"block_id": "image4",
"image_url": response['file']['permalink'],
"alt_text": "An incredibly cute kitten."
}
]
```
```
response = slack_client.chat_postMessage(
channel=channel,
text="Hello",
blocks=blocks
)
```
`Error sending message to Slack: invalid_blocks`
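One approach that may work here (a sketch; the channel id and timing behaviour are assumptions): pass `channel` (and optionally `initial_comment`) to `files_upload_v2` so the upload itself posts the message, then read the share metadata from `files_info`, which contains the `ts` (and `thread_ts` when shared into a thread) of the message the file was posted with:
```python
channel_id = "C0123456789"  # placeholder

upload = slack_client.files_upload_v2(
    channel=channel_id,
    file=image_path,
    initial_comment="Hello",
)
file_id = upload["files"][0]["id"]

# shares may take a moment to be populated after the upload completes
info = slack_client.files_info(file=file_id)
shares = info["file"].get("shares", {})
for visibility in ("public", "private"):
    for chan, entries in shares.get(visibility, {}).items():
        for entry in entries:
            print(chan, entry.get("ts"), entry.get("thread_ts"))
```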
|
closed
|
2024-11-10T23:26:07Z
|
2024-11-10T23:35:16Z
|
https://github.com/slackapi/python-slack-sdk/issues/1590
|
[
"question",
"duplicate",
"web-client",
"Version: 3x"
] |
denikryt
| 2
|
CPJKU/madmom
|
numpy
| 364
|
TypeError: frame indices must be slices or integers
|
### Expected behaviour
madmom should iterate over `FramedSignal` regardless of how it is indexed.
### Actual behaviour
In Python 3 it fails when the frame indices come from `np.arange` (i.e. NumPy integer scalars).
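A minimal sketch of the failure and a workaround (the file name and frame parameters are placeholders):
```python
import numpy as np
from madmom.audio.signal import FramedSignal

frames = FramedSignal('some_audio.wav', frame_size=2048, hop_size=441)

# works: plain Python ints
for i in range(len(frames)):
    frame = frames[i]

# fails on Python 3: np.arange yields NumPy integer scalars, which the
# index check rejects; casting to int is a workaround
for i in np.arange(len(frames)):
    frame = frames[int(i)]
```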
|
closed
|
2018-04-17T13:27:02Z
|
2018-04-18T07:13:49Z
|
https://github.com/CPJKU/madmom/issues/364
|
[] |
superbock
| 0
|
youfou/wxpy
|
api
| 72
|
Is it possible to send stickers saved in Favorites?
|
If so, how do I send them?
|
closed
|
2017-06-05T16:06:38Z
|
2017-06-06T04:27:09Z
|
https://github.com/youfou/wxpy/issues/72
|
[] |
tchy
| 1
|
numba/numba
|
numpy
| 9,463
|
How about unifying `int32` and `Literal[int](0)` as `int32`, rather than `int64`
|
code:
```python
import os
os.environ["NUMBA_DEBUG_TYPEINFER"] = "1"
from numba import njit
from numba.core import types
@njit
def foo(flag, v):
v2 = types.int32(v)
if flag:
return v2
else:
return 0
foo(True, 2)
```
The inferred return type is `int64`; could we consider unifying them to `int32` instead? I feel the latter is more intuitive.
```
---------------------------------Variable types---------------------------------
{'$14pred': bool,
'$18return_value.1': int64,
'$22return_value.1': int64,
'$2load_global.0': Module(<module 'numba.core.types' from '/Users/dali/Code/open-numba/numba/core/types/__init__.py'>),
'$4load_method.1': class(int32),
'$const20.0': Literal[int](0),
'arg.flag': bool,
'arg.v': int64,
'bool14': Function(<class 'bool'>),
'flag': bool,
'v': int64,
'v2': int32}
----------------------------------Return type-----------------------------------
int64
```
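As a workaround (reusing the imports from the snippet above): casting the literal on the other branch keeps both return paths at `int32`, so unification no longer widens the return type to `int64`:
```python
@njit
def foo(flag, v):
    v2 = types.int32(v)
    if flag:
        return v2
    else:
        # cast the literal as well, so both branches unify to int32
        return types.int32(0)

foo(True, 2)
```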
|
open
|
2024-02-24T00:26:06Z
|
2024-02-26T19:17:46Z
|
https://github.com/numba/numba/issues/9463
|
[
"feature_request",
"Blocked awaiting long term feature",
"NumPy 2.0"
] |
dlee992
| 2
|