| repo_name (string, 9–75 chars) | topic (30 classes) | issue_number (int64, 1–203k) | title (string, 1–976 chars) | body (string, 0–254k chars) | state (2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38–105 chars) | labels (list, 0–9 items) | user_login (string, 1–39 chars) | comments_count (int64, 0–452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 274
|
Cannot merge weights trained with alpaca-lora; tried the tokenizer from llama-7b-hf and the one copied over from chinese_llama_plus_lora_7b
|
### Detailed problem description
*Cannot merge the weights trained with alpaca-lora; the tokenizer was tried both from llama-7b-hf and from the copy taken from chinese_llama_plus_lora_7b.*
### Screenshots or logs
<img width="1128" alt="image" src="https://user-images.githubusercontent.com/6939585/236842494-58ecb3f3-63a7-4022-82c2-fb6c29991a77.png">
<img width="349" alt="image" src="https://user-images.githubusercontent.com/6939585/236842585-5b17bdb0-8098-4e9e-8c7b-0d2b188c9114.png">
### Required checklist
- [ ] Which model is affected: LLaMA
- [ ] Issue type:
- Model conversion and merging
|
closed
|
2023-05-08T13:56:23Z
|
2023-05-21T22:02:19Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/274
|
[
"stale"
] |
autoexpect
| 11
|
PaddlePaddle/PaddleHub
|
nlp
| 1,570
|
Regarding the Pneumonia-CT-LKM-PP model: how can I learn about its structure and tune its parameters?
|
I have recently been tinkering with the "PaddleHub COVID-19 CT image analysis" project, which uses the prediction model Pneumonia_CT_LKM_PP, and I have a couple of questions for you:
1. I want to understand the structure of this prediction model. How should I go about it? (A rough sketch is below.)
2. I want to tune the parameters of this pretrained model and train it on my own dataset, to see whether it can recognize more lesions. How should I do that?
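A minimal sketch of how one might start on question 1, assuming the standard PaddleHub module-loading API and that the module name matches the one above:

```python
import paddlehub as hub

# Load the pretrained module by name (assumes it is available in the PaddleHub registry).
module = hub.Module(name="Pneumonia_CT_LKM_PP")

# Inspect the module object to get a first look at its interface and structure.
print(module)
print([name for name in dir(module) if not name.startswith("_")])
```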
|
open
|
2021-08-13T01:46:37Z
|
2021-08-16T01:51:44Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/1570
|
[
"cv"
] |
LCreatorGH
| 1
|
aiortc/aiortc
|
asyncio
| 1,083
|
Bug in filexfer example: filename should be fp?
|
I must admit I didn't try to run the code; I'm just a random stranger on the internet who was reading the example. It appears to me that the 'filename' parameter in run_answer on line 34 should be 'fp': A) because main() passes the file pointer, not the name, and B) because fp is referenced on line 48, which (without having tried it myself) I assume will cause an exception.
https://github.com/aiortc/aiortc/blob/e9c13eab915ddc27f365356ed9ded0585b9a1bb7/examples/datachannel-filexfer/filexfer.py#L34C37-L34C45
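A minimal sketch of the suggested rename (hypothetical; only the parameter name comes from the report, the rest of the signature and body are assumed for illustration):

```python
# Before (as described): the parameter is named `filename`, but main() passes an open
# file object and line 48 references `fp`, so the example would raise a NameError.
# After: rename the parameter so the call site and the body agree.
async def run_answer(pc, signaling, fp):
    # ... the rest of the example keeps using `fp` directly, e.g. fp.write(chunk)
    ...
```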
|
closed
|
2024-04-12T14:02:28Z
|
2024-05-21T06:52:40Z
|
https://github.com/aiortc/aiortc/issues/1083
|
[] |
angererc
| 0
|
InstaPy/InstaPy
|
automation
| 6,332
|
Explanation of the "unable to unfollow if the user has no posts" issue
|
### Errors related:
`Already unfollowed xxx! or a private user that rejected your req`
Even though you definitely follow the user you are unfollowing, the problem only happens if that user has no posts, in which case Instagram shows follow suggestions instead.
### Explanation
During unfollowing, InstaPy runs get_following_status() to check whether the user profile is followed by you, follows you back (Follow Back button), is not followed by you, etc.
The problem is the Follow button XPath (`follow_button_XP`), which currently is
```
"follow_button_XP": "//button[text()='Following' or \
text()='Requested' or \
text()='Follow' or \
text()='Follow Back' or \
text()='Unblock' and not(ancestor::*/@role = 'presentation')]"
```
If the user does not have any posts, the follow suggestions are shown instead, which results in the XPath selecting the first Follow button under the suggestions (highlighted in red).

Hence, get_following_status() will return "Follow" and fall into this elif, which is incorrect.
```
elif following_status in ["Follow", "Follow Back"]:
logger.info(
"--> Already unfollowed '{}'! or a private user that "
"rejected your req".format(person)
)
post_unfollow_cleanup(
["successful", "uncertain"],
username,
person,
relationship_data,
person_id,
logger,
logfolder,
)
return False, "already unfollowed"
```
### Possible Solutions
I'm not really good at web automation, but the solution I can think of is to use more specific XPaths for the different following_status values and better if/else handling of the different scenarios (a rough sketch is at the end of this issue).
I'm aware, though, that the downside of more specific XPaths is that they have to be updated more often.
#### I'm running the latest master of Instapy on Ubuntu 21.04 on Firefox 92
Please correct me if I'm wrong. Thanks
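As a rough illustration of the "more specific XPath" idea, one could scope the follow-button lookup to the profile header so buttons inside the follow-suggestions carousel are never matched. This selector is hypothetical and untested against Instagram's current markup:

```python
# Hypothetical, untested sketch: restrict the lookup to the profile header so the first
# "Follow" button inside the follow-suggestions carousel is not picked up by mistake.
follow_button_xpath = (
    "//header//button[text()='Following' or "
    "text()='Requested' or "
    "text()='Follow' or "
    "text()='Follow Back' or "
    "text()='Unblock']"
)
```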
|
open
|
2021-09-29T14:28:40Z
|
2021-10-24T20:29:27Z
|
https://github.com/InstaPy/InstaPy/issues/6332
|
[] |
Benjaminlooi
| 1
|
OpenInterpreter/open-interpreter
|
python
| 1,430
|
import statement missing in a module
|
### Describe the bug
/envs/openinterpreter/lib/python3.9/site-packages/interpreter/terminal_interface/validate_llm_settings.py
In this file, there is a missing import statement.
The line that needs to be added is:
`from interpreter import interpreter`
After that it runs just fine.
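A minimal sketch of the fix as described (only the import line comes from the report; the file path is the one quoted above):

```python
# interpreter/terminal_interface/validate_llm_settings.py -- top of the file
from interpreter import interpreter
```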
### Reproduce
1. just launch openinterpreter, and it will produce an error
### Expected behavior
To run properly
### Screenshots
_No response_
### Open Interpreter version
0.3.9
### Python version
3.9.19
### Operating System name and version
Debian GNU/Linux 11 (bullseye)
### Additional context
_No response_
|
closed
|
2024-08-27T14:20:59Z
|
2024-08-27T15:13:21Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/1430
|
[] |
S-Vaisanen
| 0
|
jofpin/trape
|
flask
| 50
|
cannot track redirect
|
So I have set Google as the attack page, and when I access the Google link from there and type any website into the search bar, the only thing that pops up is the "do you want to leave or stay" error, nothing else.
|
closed
|
2018-08-24T05:18:44Z
|
2018-11-24T01:56:07Z
|
https://github.com/jofpin/trape/issues/50
|
[] |
asciiterminal
| 2
|
ipython/ipython
|
data-science
| 14,041
|
Update type(...) to import the type from its canonical location
|
I missed that in the previous PR, but those should likely be:
```
from types import MethodDescriptorType
from types import ModuleType
```
_Originally posted by @Carreau in https://github.com/ipython/ipython/pull/14029#discussion_r1180062854_
|
closed
|
2023-04-28T08:00:08Z
|
2023-06-02T08:37:34Z
|
https://github.com/ipython/ipython/issues/14041
|
[
"good first issue"
] |
Carreau
| 0
|
piskvorky/gensim
|
machine-learning
| 3,490
|
Compiled extensions are very slow when built with Cython 3.0.0
|
We should produce a minimal reproducible example and show the Cython guys
|
open
|
2023-08-23T11:24:48Z
|
2024-04-10T15:53:55Z
|
https://github.com/piskvorky/gensim/issues/3490
|
[] |
mpenkov
| 1
|
gunthercox/ChatterBot
|
machine-learning
| 2,256
|
mathparse and mathematical evaluation
|
**Question:** Is the code eighteen hundred and twenty-one?
**Results in a fatal error**

**Explanation:** in eighteen (and sixteen, seventeen, and all that jazz) chatterbot sees EIGHT first,
hence turning "eighteen" into "8een",
hence the keyword error.
**Solution I found**: in the file `mathwords.py` (of the library `mathparse`), add a space `' '` after each number word that raises an issue (four, six, seven, eight, nine) to force chatterbot to see the difference between eight and eighteen.

I haven't investigated further for other languages or other mathematical evaluations.
Hope it will help whoever reads this.
Cheers
|
open
|
2022-06-30T14:19:20Z
|
2022-07-01T08:52:35Z
|
https://github.com/gunthercox/ChatterBot/issues/2256
|
[] |
AskellAytch
| 0
|
taverntesting/tavern
|
pytest
| 315
|
Integration in pytest broken
|
I've noticed that with the recent version `v0.26.1` the integration in pytest seems broken. The command line parameters `--tavern*` are no longer present.
When using `v0.22.0` the command line parameters in pytest are present
```
$ ./env/bin/pytest --help | grep tavern
--tavern-global-cfg=TAVERN_GLOBAL_CFG [TAVERN_GLOBAL_CFG ...]
--tavern-http-backend=TAVERN_HTTP_BACKEND
--tavern-mqtt-backend=TAVERN_MQTT_BACKEND
--tavern-strict={body,headers,redirect_query_params} [{body,headers,redirect_query_params} ...]
--tavern-beta-new-traceback
tavern-global-cfg (linelist) One or more global configuration files to include
tavern-http-backend (string) Which http backend to use
tavern-mqtt-backend (string) Which mqtt backend to use
tavern-strict (args) Default response matching strictness
tavern-beta-new-traceback (bool) Use new traceback style (beta)
```
When using `v0.26.1` the command line parameters are no longer present
```
$ ./env/bin/pytest --help | grep tavern
$
```
|
closed
|
2019-03-18T14:10:21Z
|
2019-04-10T16:27:52Z
|
https://github.com/taverntesting/tavern/issues/315
|
[] |
flazzarini
| 2
|
huggingface/text-generation-inference
|
nlp
| 2,263
|
Documentation about default values of model parameters
|
### Feature request
In the documentation, there is not enough info about the default values TGI enforces if the client request does not contain parameters like `temperature`, `top_p`, `presence_frequency`, etc.
For example, what value would TGI use if
- `temperature` = `null`
- `temperature` is not present in the client request at all.
This would help users adjust the client codebase when migrating to/from different serving frameworks.
As far as I looked into the code base, I was unable to find the place where this is done.
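For context, a hedged sketch of what passing the parameters explicitly looks like on the client side, so the client does not depend on server-side defaults at all (the local endpoint and parameter values are made up; the `inputs`/`parameters` payload shape is TGI's `/generate` API):

```python
import requests

# Hypothetical local TGI endpoint; every sampling parameter is spelled out explicitly
# so the request does not rely on whatever defaults the server would otherwise apply.
response = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is Deep Learning?",
        "parameters": {"temperature": 0.7, "top_p": 0.95, "max_new_tokens": 64},
    },
    timeout=60,
)
print(response.json())
```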
### Motivation
Documentation for default model parameters
### Your contribution
I can create a PR if someone can point me to the correct place in the code base.
|
open
|
2024-07-19T16:39:49Z
|
2024-07-30T07:53:15Z
|
https://github.com/huggingface/text-generation-inference/issues/2263
|
[] |
mohittalele
| 3
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,426
|
Please Help
|
I just updated my app and I keep getting this error when I try to run this.
Last Error Received:
Process: Ensemble Mode
If this error persists, please contact the developers with the error details.
Raw Error Details:
RuntimeError: "Error opening 'F:\\AI Voice\\Songs\\1_1_[MV] PENTAGON(펜타곤) _ Like This_(Vocals)_(Vocals).wav': System error."
Traceback Error: "
File "UVR.py", line 6654, in process_start
File "UVR.py", line 794, in ensemble_outputs
File "lib_v5\spec_utils.py", line 590, in ensemble_inputs
File "soundfile.py", line 430, in write
File "soundfile.py", line 740, in __init__
File "soundfile.py", line 1264, in _open
File "soundfile.py", line 1455, in _error_check
"
Error Time Stamp [2024-06-26 19:29:18]
Full Application Settings:
vr_model: 6_HP-Karaoke-UVR
aggression_setting: 10
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Karaoke 2
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
device_set: Default
help_hints_var: False
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems
|
open
|
2024-06-26T23:30:24Z
|
2024-06-26T23:30:24Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1426
|
[] |
hollywoodemma
| 0
|
replicate/cog
|
tensorflow
| 1,790
|
Using the Predictor directly results in FieldInfo errors
|
`cog==0.9.6`
`pydantic==1.10.17`
`python==3.10.0`
Hello,
when writing non end-to-end tests for my cog predictor, in particular tests that instantiate then call the Predictor predict method directly, I ran into errors like
`TypeError: '<' not supported between instances of 'FieldInfo' and 'FieldInfo'`.
This is because, unlike when using the cog CLI or making an HTTP request to a running cog container, calling the `Predictor.predict` method directly bypasses cog's argument-processing layer.
So the default args remain pydantic FieldInfo objects (i.e. the return type of `cog.Input`), and virtually any basic operation on them will fail.
One possible workaround I wrote is below, but is there a better way to achieve this?
Thanks,
```python
# Imports assumed for this snippet; Predictor and logger come from my own project,
# so the module paths below are placeholders.
import inspect
from typing import Any

from pydantic.fields import FieldInfo  # pydantic v1, matching pydantic==1.10.17 above

from predict import Predictor          # placeholder: the project's cog Predictor
from my_project.logging import logger  # placeholder: the project's logger

PREDICTOR_RETURN_TYPE = inspect.signature(Predictor.predict).return_annotation
class TestPredictor(Predictor):
"""A class used only in non end-to-end tests, i.e. those that call directly
the Predictor. It is required because in the absence of the cog input processing layer
(that we get using the cog CLI), arguments to `predict()`that are not passed explicitly remain `FieldInfo` objects,
meaning any basic operation on them will raise an error."""
def predict(self, **kwargs: dict[str, Any]) -> PREDICTOR_RETURN_TYPE:
"""Processes the input (see main docstring) then call superclass' predict method.
Returns:
PREDICTOR_RETURN_TYPE: The output of the superclass `predict` method.
"""
for kwarg_name, kwarg in kwargs.items():
kwargs[kwarg_name] = kwarg.default if isinstance(kwarg, FieldInfo) else kwarg
# pass explicitly all other params
all_predict_params = inspect.signature(Predictor.predict).parameters
for param_name, param in all_predict_params.items():
if param_name != "self" and param_name not in kwargs:
kwargs[param_name] = (
param.default.default if isinstance(param.default, FieldInfo) else param.default
)
logger.info(f"Predicting with {kwargs}")
return super().predict(**kwargs)
```
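A possible usage sketch for the wrapper above (the test name, the `setup()` call, and the `num_outputs` argument are hypothetical):

```python
# Hypothetical test: only the argument under test is passed explicitly; everything
# else falls back to the cog.Input defaults resolved by TestPredictor.predict().
def test_predict_runs_with_defaults():
    predictor = TestPredictor()
    predictor.setup()  # only if the Predictor defines a setup() step
    output = predictor.predict(num_outputs=1)
    assert output is not None
```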
|
open
|
2024-07-08T10:01:49Z
|
2024-07-08T17:00:28Z
|
https://github.com/replicate/cog/issues/1790
|
[] |
Clement-Lelievre
| 2
|
huggingface/datasets
|
pandas
| 6,663
|
`write_examples_on_file` and `write_batch` are broken in `ArrowWriter`
|
### Describe the bug
`write_examples_on_file` and `write_batch` are broken in `ArrowWriter` since #6636. The order between the columns and the schema is not preserved anymore. So these functions don't work anymore unless the order happens to align well.
### Steps to reproduce the bug
Try to do `write_batch` with anything that has many columns, and it's likely to break.
### Expected behavior
I expect these functions to work, instead of it trying to cast a column to its incorrect type.
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-5.15.0-1040-aws-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.19.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
|
closed
|
2024-02-15T01:43:27Z
|
2024-02-16T09:25:00Z
|
https://github.com/huggingface/datasets/issues/6663
|
[] |
bryant1410
| 3
|
dfm/corner.py
|
data-visualization
| 159
|
Error on import, pandas incompatibility?
|
Hello,
Today I noticed that I get an error when importing corner:
`import corner`
returns
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mdecleir/opt/anaconda3/envs/astroconda/lib/python3.7/site-packages/corner/__init__.py", line 6, in <module>
from .corner import corner
File "/Users/mdecleir/opt/anaconda3/envs/astroconda/lib/python3.7/site-packages/corner/corner.py", line 12, in <module>
from .arviz_corner import arviz_corner
File "/Users/mdecleir/opt/anaconda3/envs/astroconda/lib/python3.7/site-packages/corner/arviz_corner.py", line 9, in <module>
from arviz.data import convert_to_dataset
File "/Users/mdecleir/opt/anaconda3/envs/astroconda/lib/python3.7/site-packages/arviz/__init__.py", line 32, in <module>
from .data import *
File "/Users/mdecleir/opt/anaconda3/envs/astroconda/lib/python3.7/site-packages/arviz/data/__init__.py", line 2, in <module>
from .base import CoordSpec, DimSpec, dict_to_dataset, numpy_to_data_array
File "/Users/mdecleir/opt/anaconda3/envs/astroconda/lib/python3.7/site-packages/arviz/data/base.py", line 10, in <module>
import xarray as xr
File "/Users/mdecleir/opt/anaconda3/envs/astroconda/lib/python3.7/site-packages/xarray/__init__.py", line 12, in <module>
from .core.combine import concat, auto_combine
File "/Users/mdecleir/opt/anaconda3/envs/astroconda/lib/python3.7/site-packages/xarray/core/combine.py", line 11, in <module>
from .merge import merge
File "/Users/mdecleir/opt/anaconda3/envs/astroconda/lib/python3.7/site-packages/xarray/core/merge.py", line 10, in <module>
PANDAS_TYPES = (pd.Series, pd.DataFrame, pd.Panel)
File "/Users/mdecleir/opt/anaconda3/envs/astroconda/lib/python3.7/site-packages/pandas/__init__.py", line 244, in __getattr__
raise AttributeError(f"module 'pandas' has no attribute '{name}'")
AttributeError: module 'pandas' has no attribute 'Panel'
```
I did not see this error before, and I am wondering whether a recent update to pandas might cause this problem?
|
closed
|
2021-04-06T16:11:52Z
|
2021-04-06T18:49:19Z
|
https://github.com/dfm/corner.py/issues/159
|
[] |
mdecleir
| 5
|
521xueweihan/HelloGitHub
|
python
| 2,667
|
[Open-source self-recommendation] 🎉 Vue TSX Admin, a new direction for building admin dashboards
|
## Recommended project
<!-- This is the entry point for recommending projects to the HelloGitHub monthly. Self-recommendations and recommendations of open-source projects are welcome; the only requirement is to introduce the project following the prompts below. -->
<!-- Click "Preview" above to view the submitted content right away -->
<!-- Only open-source projects hosted on GitHub are accepted; please fill in the GitHub project URL -->
- Project URL: https://github.com/manyuemeiquqi/vue-tsx-admin
<!-- Please choose from: C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning -->
- Category: Vue Admin
<!-- Please describe what it does in about 20 characters, like an article title that makes it clear at a glance -->
- Project title: vue-tsx-admin is a free, open-source admin dashboard module that helps you quickly set up an admin front-end project.
<!-- What the project is, what it can be used for, which features it has or which pain points it solves, which scenarios it suits, and what beginners can learn from it. Length 32-256 characters -->
- Project description: It uses the latest front-end stack and is developed entirely in the Vue 3 + TSX style. It provides an out-of-the-box admin front-end solution with built-in i18n internationalization, configurable layouts, theme color customization, and permission checks, and it distills typical business models.
<!-- What makes it stand out? How does it compare with similar projects? -->
- Highlights:
1. Developed entirely with JSX, suited to developers who prefer the JSX style
2. Provides an out-of-the-box admin front-end solution
3. Solves pain points of the usual admin-template development style, such as redundant custom table columns, the difficulty of building declarative dialogs in the same file, and having to keep track of both the script and the template when extracting components
4. Open-source admin templates using this development style are rare
- Example code: (optional)
```js
const handleError = () => {
Modal.error({
title: () => <div>error</div>,
content: () => (
<p>
<span>Message can be error</span>
<IconErro />
</p>
)
})
}
```
- Future update plan:
- Bug fixes only; the project should stay usable out of the box without piling on too many features
|
closed
|
2024-01-02T04:59:11Z
|
2024-01-23T07:47:59Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2667
|
[] |
manyuemeiquqi
| 2
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 735
|
optimal parameters for synthetic depth image to real depth image
|
Hi,
I am planning to train a cycleGAN model to translate synthetic depth images to real depth images from my depth-camera. I do not have any idea how to choose the optimal parameters. My synthetic and real images have size 480x640. I am planning to resize them to 512x512 -> load_size=512 and I will use crop_size=512.
I have the following questions:
Is it a good idea to use CycleGAN for this problem?
Should I use a different crop_size?
Should I change the network architecture?
Should I change the default parameters for netD and netG?
Here are example synthetic and real depth images from my dataset:
<img width="514" alt="Bildschirmfoto 2019-08-19 um 16 13 58" src="https://user-images.githubusercontent.com/45715708/63272552-8e52f980-c29c-11e9-951c-4c986d76f0c2.png">
best regards
|
open
|
2019-08-19T14:16:42Z
|
2019-09-03T19:13:02Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/735
|
[] |
frk1993
| 3
|
graphql-python/graphql-core
|
graphql
| 139
|
Rename project/repository
|
hey there! This is a wonderful repository, and a really great reference implementation!
Here's why I think that this repository should be renamed to `graphql` or `graphql-py` instead of `graphql-core`
The current name sort of gives us a feel that it's not for the end user.
It gives us a feel that this project is only for other graphql libraries to build abstractions on top of.
In the GraphQL ecosystem, you'd generally build out your whole schema with graphql-js; in fact, it was the only way at the start. It was only after the reference implementation that we got Apollo Server / Nexus.
In the Python ecosystem, it seems to be the opposite: graphql-core is newer than graphene and is seeing lower adoption.
Also, I think it would help to rename the PyPI package to `graphql` instead of `graphql-core`.
This is my opinion, what do you think?
|
closed
|
2021-09-08T07:21:04Z
|
2021-09-19T11:03:19Z
|
https://github.com/graphql-python/graphql-core/issues/139
|
[] |
aryaniyaps
| 1
|
HumanSignal/labelImg
|
deep-learning
| 131
|
PyQt4, QImage.fromData(self.imageData) always return null image
|
labelImg always says 'Make sure D:\sleep\r4_hyxx_2016.6.2_yw_6\JPEGImages\000001.jpg is a valid image file.'
I'm sure it's a valid image file.
I tried this:
```python
from PyQt4.QtGui import QImage  # import added for completeness

filename = 'D:\\sleep\\r4_hyxx_2016.6.2_yw_6\\JPEGImages\\000001.jpg'
f = open(filename, 'rb')
imageData = f.read()
fout = open('testout.jpg', 'wb')
fout.write(imageData)
fout.close()
image = QImage.fromData(imageData)
image.isNull()
```
It reads and writes the jpg file correctly; I get a correct 'testout.jpg'.
But image.isNull() always returns true, so I get an empty QImage.
windows10,
python 2.7
sig-4.19.3,
PyQt4-4.11.4
|
closed
|
2017-07-29T07:46:19Z
|
2018-06-23T18:25:57Z
|
https://github.com/HumanSignal/labelImg/issues/131
|
[] |
huqiaoping
| 2
|
Neoteroi/BlackSheep
|
asyncio
| 400
|
TestClient Example from Documentation and from GIT/BlackSheep-Examples fails
|
I’ve reproduced all the steps by copying/pasting from the official documentation https://www.neoteroi.dev/blacksheep/testing/ and **can't run the test example**. I get different errors. I believe the official documentation is outdated and can’t be used for studying.
So, I downloaded the **complete example from https://github.com/Neoteroi/BlackSheep-Examples/tree/main/testing-api**
and tried to run it. It **fails too**. Please find the enclosed screenshot. I’ve tried changing the versions of BlackSheep (1.2.18 / 2.0a9) and pydantic (1.10.10 / 2.0.2) in requirements.txt with no luck. The tests always fail, and depending on the selected package versions I get additional errors.
**Please check whether the example at https://github.com/Neoteroi/BlackSheep-Examples/tree/main/testing-api can be fixed to work with some version of BlackSheep.**
It would also be great if you could update the official documentation, not only the article about testing but many other pages as well, as they can’t be followed without getting errors.
Thanks.

pip list
Package Version
------------------ --------
annotated-types 0.5.0
blacksheep 2.0a9
certifi 2023.5.7
charset-normalizer 3.1.0
click 8.1.5
colorama 0.4.6
essentials 1.1.5
essentials-openapi 1.0.7
guardpost 1.0.2
h11 0.14.0
httptools 0.6.0
idna 3.4
iniconfig 2.0.0
itsdangerous 2.1.2
Jinja2 3.1.2
MarkupSafe 2.1.3
packaging 23.1
pip 22.3.1
pluggy 1.2.0
pydantic 2.0.2
pydantic_core 2.1.2
pytest 7.4.0
pytest-asyncio 0.21.1
python-dateutil 2.8.2
PyYAML 6.0
requests 2.31.0
rodi 2.0.2
setuptools 65.5.1
six 1.16.0
typing_extensions 4.7.1
urllib3 2.0.3
uvicorn 0.23.0
wheel 0.38.4
|
closed
|
2023-07-16T13:51:57Z
|
2023-11-26T09:20:37Z
|
https://github.com/Neoteroi/BlackSheep/issues/400
|
[] |
xxxxxxbox
| 1
|
robotframework/robotframework
|
automation
| 5,340
|
BDD prefixes with same beginning are not handled properly
|
**Description**:
When defining custom language prefixes for BDD steps in the Robot Framework (for example, in a French language module), I observed that adding multiple prefixes with overlapping substrings (e.g., "Sachant que" and "Sachant") leads to intermittent failures in matching the intended BDD step.
**Steps to Reproduce:**
Create a custom language (e.g., a French language class) inheriting from Language.
Define given_prefixes with overlapping entries such as:
`given_prefixes = ['Étant donné', 'Soit', 'Sachant que', 'Sachant']`
Run tests that rely on detecting BDD step prefixes.
Occasionally, tests fail with errors indicating that the keyword "Sachant que" cannot be found, even though it is defined.
**Expected Behavior:**
The regex built from the BDD prefixes should consistently match the longer, more specific prefix ("Sachant que") instead of prematurely matching the shorter prefix ("Sachant").
**Observed Behavior:**
Due to the unordered nature of Python sets, the bdd_prefixes property (which is constructed using a set) produces a regex with the alternatives in an arbitrary order. This sometimes results in the shorter prefix ("Sachant") matching before the longer one ("Sachant que"), causing intermittent failures.
**Potential Impact:**
Unpredictable test failures when multiple BDD prefixes with overlapping substrings are used.
Difficulty in debugging issues due to the non-deterministic nature of the problem.
**Suggested Fix:**
Sort the BDD prefixes by descending length before constructing the regular expression. For example:
```python
@property
def bdd_prefix_regexp(self):
if not self._bdd_prefix_regexp:
# Sort prefixes by descending length so that the longest ones are matched first
prefixes = sorted(self.bdd_prefixes, key=len, reverse=True)
pattern = '|'.join(prefix.replace(' ', r'\s') for prefix in prefixes).lower()
self._bdd_prefix_regexp = re.compile(rf'({pattern})\s', re.IGNORECASE)
return self._bdd_prefix_regexp
```
This change would ensure that longer, more specific prefixes are matched before their shorter substrings, eliminating the intermittent failure.
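A quick standalone illustration (plain `re`, outside Robot Framework) of why the order of alternatives matters here:

```python
import re

# With the shorter alternative first, the regex stops at "Sachant" and leaves "que"
# glued to the keyword name; with the longer alternative first, the full prefix matches.
unsorted_pattern = re.compile(r'(Sachant|Sachant que)\s', re.IGNORECASE)
sorted_pattern = re.compile(r'(Sachant que|Sachant)\s', re.IGNORECASE)

step = 'Sachant que la page est ouverte'
print(unsorted_pattern.match(step).group(1))  # Sachant
print(sorted_pattern.match(step).group(1))    # Sachant que
```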
**Environment:**
Robot Framework version: 7.1.1
Python version: 3.11.5
**Additional Notes:**
This bug appears to be caused by the unordered nature of sets in Python, which is used in the bdd_prefixes property to combine all prefixes. The issue might not manifest when there are only three prefixes or when there is no overlapping substring scenario.
I hope this detailed report helps in reproducing and resolving the issue.
|
closed
|
2025-02-16T22:29:14Z
|
2025-03-07T10:16:10Z
|
https://github.com/robotframework/robotframework/issues/5340
|
[
"priority: medium",
"acknowledge",
"effort: small"
] |
orenault
| 1
|
developmentseed/lonboard
|
data-visualization
| 198
|
Clean up memory after closing Map or stopping/restarting the python process
|
When visualizing a large dataset, the browser tab (or GPU memory) can hold a lot of memory, which isn't cleaned up by restarting the python process (testing this in a notebook, and so restarting the kernel).
Also explicitly calling `map.close()` or `map.close_all()` doesn't seem to help. Only actually closing the browser tab does clean up the memory.
Testing this using Firefox.
|
closed
|
2023-11-03T18:14:52Z
|
2024-02-27T20:21:52Z
|
https://github.com/developmentseed/lonboard/issues/198
|
[] |
jorisvandenbossche
| 3
|
tensorlayer/TensorLayer
|
tensorflow
| 1,152
|
Questions about PPO
|
I use PPO to make a car automatically find its way and avoid obstacles, but it didn't perform well. Similar examples use a DQN network. Why does DQN work here but PPO does not?
|
open
|
2022-01-15T00:57:20Z
|
2022-11-03T10:42:34Z
|
https://github.com/tensorlayer/TensorLayer/issues/1152
|
[] |
imitatorgkw
| 1
|
kornia/kornia
|
computer-vision
| 2,162
|
Add tensor to gradcheck into the base tester
|
Tensor to gradcheck should also be in the base tester.
_Originally posted by @edgarriba in https://github.com/kornia/kornia/pull/2152#discussion_r1071806021_
Apply it to all tensors in the inputs of `self.gradcheck` automatically.
|
closed
|
2023-01-20T12:25:46Z
|
2024-01-23T23:22:41Z
|
https://github.com/kornia/kornia/issues/2162
|
[
"enhancement :rocket:",
"help wanted"
] |
johnnv1
| 0
|
nvbn/thefuck
|
python
| 980
|
Give advice on Python version
|
the output of thefuck --version:
`The Fuck 3.29 using Python 3.7.3 and Bash 3.2.57(1)-release`
your system:
`macOS Sierra 10.12.6`
How to reproduce the bug:
I need to run python3 to use Python in the terminal, but when I run python, I get this:
`(base) root:Desktop iuser$ python`
`-bash: /Library/Frameworks/Python.framework/Versions/3.5/bin/python3: No such file or directory`
`(base) root:Desktop iuser$ fuck`
`No fucks given`
I know this can be solved by modifying the configuration file. But can thefuck give advice on the correct version of Python, like 'python3'?
My English is poor... Have I made myself clear?
|
open
|
2019-10-16T07:38:25Z
|
2019-10-20T00:22:12Z
|
https://github.com/nvbn/thefuck/issues/980
|
[] |
icditwang
| 3
|
zappa/Zappa
|
django
| 1,343
|
Changing batch size in SQS event does not actually change the batch size of the SQS trigger
|
<!--- Provide a general summary of the issue in the Title above -->
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 3.8/3.9/3.10/3.11/3.12 -->
I was updating an SQS event batch size via the Zappa settings and assumed it had worked, without even checking the console to make sure. Later, while testing another function that invoked the previous one, I noticed the batch size had not increased.
## Expected Behavior
<!--- Tell us what should happen -->
I changed the batch size in the zappa settings from 1 > 5 and the SQS trigger batch size should change from 1 > 5.
Old Settings:
```
{
"function": "app.lambda_handler",
"event_source": {
"arn": "arn:aws:sqs:us-west-2:1........",
"batch_size": 1,
"enabled": true
}
}
```
New Settings:
```
{
"function": "app.lambda_handler",
"event_source": {
"arn": "arn:aws:sqs:us-west-2:1........",
"batch_size": 5,
"enabled": true
}
}
```
## Actual Behavior
<!--- Tell us what happens instead -->
Nothing happens.
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Create a deployment with an SQS event with a batch size of 1.
2. Change the batch size to 5 and update the function.
3. Check AWS console to see the batch size has not changed.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.59.0
* Operating System and Python version: MacOS Python 3.12
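As a possible interim workaround until Zappa picks up the new setting, the event source mapping can be updated directly with boto3 (the function name below is a placeholder; `list_event_source_mappings` and `update_event_source_mapping` are standard Lambda APIs, not part of Zappa itself):

```python
import boto3

lambda_client = boto3.client("lambda")

# Find the SQS trigger's event source mapping for the deployed function...
mappings = lambda_client.list_event_source_mappings(FunctionName="my-zappa-app-dev")
uuid = mappings["EventSourceMappings"][0]["UUID"]

# ...and update the batch size directly, since the Zappa update did not change it.
lambda_client.update_event_source_mapping(UUID=uuid, BatchSize=5)
```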
|
closed
|
2024-07-26T12:53:29Z
|
2024-11-10T16:40:24Z
|
https://github.com/zappa/Zappa/issues/1343
|
[
"no-activity",
"auto-closed"
] |
lmuther8
| 4
|
python-restx/flask-restx
|
flask
| 633
|
Dead or alive ?
|
Is this project still alive or is it considered dead ?
**Additional context**
I am seeing a lot of PRs (40+) and close to 300 open issues on this project. Some minor but recent (~3 months ago) changes were made to the code (preparing a version 1.3.1), but I can't see any real progress lately.
That being said, this is just an attempt to understand the project's future - not judging anything.
I've been using this project for some of my stacks so it would be beautiful to see this alive and moving forward.
Best
Mike
|
open
|
2025-02-05T09:17:51Z
|
2025-02-15T15:24:39Z
|
https://github.com/python-restx/flask-restx/issues/633
|
[
"question"
] |
MikeSchiessl
| 0
|
huggingface/datasets
|
computer-vision
| 6,854
|
Wrong example of usage when config name is missing for community script-datasets
|
As reported by @Wauplin, when loading a community dataset with a script, there is a bug in the example of usage shown in the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name is missing.
Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all']
Example of usage:
`load_dataset('fleurs', 'af_za')`
```
Note the example of usage in the error message suggests loading "fleurs" instead of "google/fleurs".
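For comparison, a minimal sketch of what the corrected example of usage would look like, with the namespaced repo id:

```python
from datasets import load_dataset

# The error message should suggest the full repo id, namespace included:
ds = load_dataset("google/fleurs", "af_za")
```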
|
closed
|
2024-05-02T06:59:39Z
|
2024-05-03T15:51:59Z
|
https://github.com/huggingface/datasets/issues/6854
|
[
"bug"
] |
albertvillanova
| 0
|
d2l-ai/d2l-en
|
deep-learning
| 2,489
|
installing d2l package on google colab
|
I've tried to run the code on Google Colab.
I tried `pip install d2l==1.0.0-beta0`, but it didn't work.
So I used `pip install d2l==1.0.0-alpha0` and tried to run the following code.
```python
d2l.set_figsize()
img = d2l.plt.imread('../img/catdog.jpg')
d2l.plt.imshow(img);
```
But it says `FileNotFoundError: [Errno 2] No such file or directory: '../img/catdog.jpg'`
How can I run codes in this book on google colab?😭
|
closed
|
2023-05-15T05:50:24Z
|
2023-07-10T13:26:54Z
|
https://github.com/d2l-ai/d2l-en/issues/2489
|
[
"bug"
] |
minjae35
| 1
|
mwaskom/seaborn
|
matplotlib
| 3,382
|
sns.scatterplot
|
Hello! I have found an issue: when I customize the color range, Seaborn doesn't follow my custom colors, and the legend shows the wrong colors.
Code:
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import seaborn as sns

merged_df1 = pd.read_csv("C:\\Users\\Administrator\\Desktop\\data.csv")
plt.figure(figsize=(8.5, 8))
thresholds = [5, 50, 100, 200]
colors = ['darkslategrey', 'skyblue', 'deepskyblue', 'white', 'orange']
legend_patches = [
    mpatches.Patch(color=colors[0], label=f'<{thresholds[0]}'),
    mpatches.Patch(color=colors[1], label=f'{thresholds[0]} - {thresholds[1]}'),
    mpatches.Patch(color=colors[2], label=f'{thresholds[1]} - {thresholds[2]}'),
    mpatches.Patch(color=colors[3], label=f'{thresholds[2]} - {thresholds[3]}'),
    mpatches.Patch(color=colors[4], label=f'>{thresholds[3]}'),
]
conditions = [
    (merged_df1['logpvalue'] < thresholds[0]),
    (merged_df1['logpvalue'] >= thresholds[0]) & (merged_df1['logpvalue'] < thresholds[1]),
    (merged_df1['logpvalue'] >= thresholds[1]) & (merged_df1['logpvalue'] < thresholds[2]),
    (merged_df1['logpvalue'] >= thresholds[2]) & (merged_df1['logpvalue'] < thresholds[3]),
    (merged_df1['logpvalue'] >= thresholds[3]),
]
color_array = np.select(conditions, colors)
cmap = sns.color_palette(colors, as_cmap=True)
sns.scatterplot(x=merged_df1['group'], y=merged_df1['gene'], hue=color_array,
                size=merged_df1['tpm'], sizes=(5, 250), legend='auto',
                palette=cmap, edgecolor='black')
plt.title('Bubble Chart')
plt.xlabel('tissue')
plt.ylabel('motif')
sizes = [30, 100, 200]
legend_sizes = [plt.scatter([], [], s=size, color='grey', alpha=0.6) for size in sizes]
legend_labels = [f'TPM: {size}' for size in sizes]
a = plt.legend(loc='upper right', bbox_to_anchor=(1.2, 1), prop={'size': 10},
               handles=legend_patches, title='-log pvalue')
plt.legend(legend_sizes, legend_labels, title='Size', loc='upper right',
           bbox_to_anchor=(1.2, 0.75), prop={'size': 10})
plt.gca().add_artist(a)
```



|
closed
|
2023-06-12T09:33:48Z
|
2023-06-12T12:36:22Z
|
https://github.com/mwaskom/seaborn/issues/3382
|
[] |
gavinjzg
| 2
|
tox-dev/tox
|
automation
| 3,292
|
{env_name} substitution broken since version 4.14.1
|
## Issue
I'm creating a pipeline where a coverage report is generated over multiple test runs (spanning different Python versions). To do this, I want to include the tox environment in the coverage file name, so the artifacts don't overwrite each other in the pipeline job that combines the coverage files.
It's configured as follows (with some sections left out):
```
[testenv]
package = wheel
deps =
coverage[toml] ~= 7.0
commands_pre =
python -m coverage erase
commands =
python -m coverage run --data-file=.coverage.{env_name} --branch --source my.project -m unittest discover {posargs:tests/unit}
```
This worked fine up to tox 4.14.0:
```
py311: commands_pre[0]> python -m coverage erase
py311: commands[0]> python -m coverage run --data-file=.coverage.py311 --branch --source my.project -m unittest discover tests/unit
```
However, from tox 4.14.1 onwards, the `env_name` variable isn't substituted anymore and just comes through as `{env_name}`:
```
py311: commands_pre[0]> python -m coverage erase
py311: commands[0]> python -m coverage run --data-file=.coverage.{env_name} --branch --source myproject -m unittest discover tests/unit
```
## Environment
Provide at least:
- OS: Windows 10
<details open>
<summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary>
```console
Package Version
------------- -------
cachetools 5.3.3
chardet 5.2.0
colorama 0.4.6
distlib 0.3.8
filelock 3.14.0
packaging 24.1
pip 24.0
platformdirs 4.2.2
pluggy 1.5.0
pyproject-api 1.6.1
setuptools 65.5.0
tomli 2.0.1
tox 4.14.1
virtualenv 20.26.2
```
</details>
## Output of running tox
See above.
## Minimal example
I tried to create a minimal example by creating a tox configuration that doesn't build and install a package, but for some reason the substitution in the command did work there. So somehow the package building and installing is important for the issue to occur.
Unfortunately, I can't share my entire tox config, as it's part of a corporate repo.
|
closed
|
2024-06-10T14:48:34Z
|
2024-08-20T13:53:47Z
|
https://github.com/tox-dev/tox/issues/3292
|
[] |
rhabarberbarbara
| 12
|
Nemo2011/bilibili-api
|
api
| 820
|
Posting a dynamic raises "'str' object has no attribute 'content'"
|
```python
try:
    global last_id
    dyn_req = dynamic.BuildDynamic.empty()
    upload_files = ['A', 'B', 'C', 'D', 'E']
    for item in upload_files:
        item_path = f"{BASE_PATH}{item}.JPG"
        result = await dynamic.upload_image(image=item_path, credential=credential)
        # if debug: print(result)
        if 'image_url' in result:
            dyn_req.add_image(Picture.from_url(result['image_url']))
        await asyncio.sleep(5)
    dyn_req.add_plain_text(text=dynamic_title)
    # dyn_req.set_options(close_comment=True)
    result = await dynamic.send_dynamic(dyn_req, credential)
    # if debug: print(result)
    if result: last_id = result
except Exception as e:
    print(f"发布动态出错, {e}")  # "error while posting the dynamic"
```
It reports: 发布动态出错 (error while posting the dynamic), 'str' object has no attribute 'content'
|
closed
|
2024-10-06T08:33:08Z
|
2024-10-07T03:37:54Z
|
https://github.com/Nemo2011/bilibili-api/issues/820
|
[
"question"
] |
farjar
| 1
|
tiangolo/uvicorn-gunicorn-fastapi-docker
|
pydantic
| 84
|
How to pass "live auto-reload option" with docker-compose
|
hello there,
- Is it possible to pass the live auto-reload option (/start-reload.sh) in our docker-compose.yml file?
- Is it possible to pass the live auto-reload option (/start-reload.sh) to the docker-compose up -d command?
many thanks
|
closed
|
2021-04-22T08:56:25Z
|
2021-04-22T12:07:19Z
|
https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/84
|
[] |
ghost
| 1
|
clovaai/donut
|
computer-vision
| 30
|
donut processing on PDF Documents
|
Hello,
I have a few certificate documents which are in PDF format. I want to extract metadata from those documents as you suggested.
Could you please clarify the points below for me?
1. Can I use your model directly, without pretraining on the certificate data?
2. How do I train your model on my certificates, given that they are confidential, and what folder structure do you expect for the training data?
3. How do I convert my dataset into your format (synthdog)? That part was not very clear to me.
Thank you and looking forward to your response.
Best Regards,
Arun
|
closed
|
2022-08-19T11:53:41Z
|
2022-08-24T02:48:54Z
|
https://github.com/clovaai/donut/issues/30
|
[] |
Arun4GS
| 4
|
keras-team/keras
|
data-science
| 20,169
|
How to make dynamic assertions in Keras v3?
|
In TensorFlow with Keras v2 I have two common cases for making dynamic assertions:
1. Dynamic shape assertion
```python
x = <some input tensor in NHWC format>
min_size = tf.reduce_min(tf.shape(x)[1:3])
assert_size = tf.assert_greater(min_size, 2)
with tf.control_dependencies([assert_size]):
<apply some ops that work correctly only with inputs of minimum size = 2>
```
2. Dynamic value assertion
```python
x = <some input tensor>
max_val = <some input value>
max_delta = ops.max(x) - ops.min(x)
assert_delta = tf.assert_less(max_delta, max_val + backend.epsilon())
with tf.control_dependencies([assert_delta]):
<apply some ops that work correctly only with inputs if assertion success>
```
How can I make such assertions in Keras v3? I couldn't find any example in the sources.
|
closed
|
2024-08-27T06:13:22Z
|
2024-09-13T12:02:21Z
|
https://github.com/keras-team/keras/issues/20169
|
[
"type:support",
"stat:awaiting response from contributor",
"stale"
] |
shkarupa-alex
| 3
|
deepset-ai/haystack
|
machine-learning
| 9,088
|
Docs for HierarchicalDocumentSplitter
|
Added in #9067
|
open
|
2025-03-21T14:03:46Z
|
2025-03-24T09:09:56Z
|
https://github.com/deepset-ai/haystack/issues/9088
|
[
"type:documentation",
"P1"
] |
dfokina
| 0
|
neuml/txtai
|
nlp
| 65
|
Add summary pipeline
|
Use Hugging Face's summary pipeline to summarize text.
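For reference, a minimal sketch of the underlying Hugging Face summarization pipeline this would wrap (the final txtai summary pipeline API may look different):

```python
from transformers import pipeline

# Plain Hugging Face summarization pipeline; a txtai summary pipeline would wrap this.
summarizer = pipeline("summarization")
text = (
    "txtai builds an AI-powered index over text. It supports semantic search, "
    "and pipelines add capabilities such as summarization on top of the index."
)
print(summarizer(text, max_length=40, min_length=5)[0]["summary_text"])
```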
|
closed
|
2021-03-17T18:52:12Z
|
2021-05-13T15:07:26Z
|
https://github.com/neuml/txtai/issues/65
|
[] |
davidmezzetti
| 0
|
iterative/dvc
|
machine-learning
| 10,572
|
get / import: No storage files available
|
# Bug Report
## Description
I have tracked files in `repo-a` under `data`. `dvc import` and `dvc get` both fail when trying to get files from `repo-a` in `repo-b`.
### Reproduce
I cloned my own repo (`repo-a`) under `/tmp` to test whether `dvc pull` works. It does. Then I checked status and remote:
```
[/tmp/repo-a] [master *]
-> % uv run dvc status -c
Cache and remote 'azure-blob' are in sync.
[/tmp/repo-a] [master *]
-> % uv run dvc list --dvc-only .
data
```
So that is all correct.
Then I go to my `repo-b`. I configured the remote to be the same as the one of `repo-a`. Here is the check:
```
[repo-b] [master *]
-> % diff .dvc/config.local /tmp/repo-a/.dvc/config.local | wc -l
0
```
Then I try to get the data from `repo-a`. It fails
```
[repo-b] [master *]
-> % uv run dvc list "git@gitlab.com:<org>/repo-a.git" --dvc-only
data
[repo-b] [master *]
-> % uv run dvc get "git@gitlab.com:<org>/repo-a.git" "data" -v
2024-09-30 13:41:19,905 DEBUG: v3.55.2 (pip), CPython 3.10.14 on Linux-6.8.0-45-generic-x86_64-with-glibc2.35
2024-09-30 13:41:19,906 DEBUG: command: /.../repo-b/.venv/bin/dvc get git@gitlab.com:<org>/repo-a.git data -v
2024-09-30 13:41:19,985 DEBUG: Creating external repo git@gitlab.com:<org>/repo-a.git@None
2024-09-30 13:41:19,985 DEBUG: erepo: git clone 'git@gitlab.com:<org>/repo-a.git' to a temporary dir
2024-09-30 13:41:42,394 DEBUG: failed to load ('data', 'cvat', 'datumaro-dataset') from storage local (/tmp/tmpsuoa_qcgdvc-cache/files/md5) - [Errno 2] No such file or directory: '/tmp/tmpsuoa_qcgdvc-cache/files/md5/8a/6de34918ed22935e97644bf465f920.dir'
Traceback (most recent call last):
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/index/index.py", line 611, in _load_from_storage
_load_from_object_storage(trie, entry, storage)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/index/index.py", line 547, in _load_from_object_storage
obj = Tree.load(storage.odb, root_entry.hash_info, hash_name=storage.odb.hash_name)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/hashfile/tree.py", line 193, in load
with obj.fs.open(obj.path, "r") as fobj:
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_objects/fs/base.py", line 324, in open
return self.fs.open(path, mode=mode, **kwargs)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_objects/fs/local.py", line 131, in open
return open(path, mode=mode, encoding=encoding) # noqa: SIM115
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpsuoa_qcgdvc-cache/files/md5/8a/6de34918ed22935e97644bf465f920.dir'
2024-09-30 13:41:42,401 ERROR: unexpected error - failed to load directory ('data', 'cvat', 'datumaro-dataset'): [Errno 2] No such file or directory: '/tmp/tmpsuoa_qcgdvc-cache/files/md5/8a/6de34918ed22935e97644bf465f920.dir'
Traceback (most recent call last):
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/index/index.py", line 611, in _load_from_storage
_load_from_object_storage(trie, entry, storage)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/index/index.py", line 547, in _load_from_object_storage
obj = Tree.load(storage.odb, root_entry.hash_info, hash_name=storage.odb.hash_name)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/hashfile/tree.py", line 193, in load
with obj.fs.open(obj.path, "r") as fobj:
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_objects/fs/base.py", line 324, in open
return self.fs.open(path, mode=mode, **kwargs)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_objects/fs/local.py", line 131, in open
return open(path, mode=mode, encoding=encoding) # noqa: SIM115
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpsuoa_qcgdvc-cache/files/md5/8a/6de34918ed22935e97644bf465f920.dir'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc/cli/__init__.py", line 211, in main
ret = cmd.do_run()
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc/cli/command.py", line 41, in do_run
return self.run()
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc/commands/get.py", line 30, in run
return self._get_file_from_repo()
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc/commands/get.py", line 37, in _get_file_from_repo
Repo.get(
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc/repo/get.py", line 64, in get
download(fs, fs_path, os.path.abspath(out), jobs=jobs)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc/fs/__init__.py", line 67, in download
return fs._get(fs_path, to, batch_size=jobs, callback=cb)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc/fs/dvc.py", line 692, in _get
return self.fs._get(
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc/fs/dvc.py", line 543, in _get
for root, dirs, files in self.walk(rpath, maxdepth=maxdepth, detail=True):
File "/.../repo-b/.venv/lib/python3.10/site-packages/fsspec/spec.py", line 468, in walk
yield from self.walk(
File "/.../repo-b/.venv/lib/python3.10/site-packages/fsspec/spec.py", line 468, in walk
yield from self.walk(
File "/.../repo-b/.venv/lib/python3.10/site-packages/fsspec/spec.py", line 427, in walk
listing = self.ls(path, detail=True, **kwargs)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc/fs/dvc.py", line 382, in ls
for info in dvc_fs.ls(dvc_path, detail=True):
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_objects/fs/base.py", line 519, in ls
return self.fs.ls(path, detail=detail, **kwargs)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/fs.py", line 164, in ls
for key, info in self.index.ls(root_key, detail=True):
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/index/index.py", line 764, in ls
self._ensure_loaded(root_key)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/index/index.py", line 761, in _ensure_loaded
self._load(prefix, entry)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/index/index.py", line 710, in _load
self.onerror(entry, exc)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/index/index.py", line 638, in _onerror
raise exc
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/index/index.py", line 708, in _load
_load_from_storage(self._trie, entry, storage_info)
File "/.../repo-b/.venv/lib/python3.10/site-packages/dvc_data/index/index.py", line 626, in _load_from_storage
raise DataIndexDirError(f"failed to load directory {entry.key}") from last_exc
dvc_data.index.index.DataIndexDirError: failed to load directory ('data', 'cvat', 'datumaro-dataset')
2024-09-30 13:41:42,432 DEBUG: Version info for developers:
DVC version: 3.55.2 (pip)
-------------------------
Platform: Python 3.10.14 on Linux-6.8.0-45-generic-x86_64-with-glibc2.35
Subprojects:
dvc_data = 3.16.6
dvc_objects = 5.1.0
dvc_render = 1.0.2
dvc_task = 0.4.0
scmrepo = 3.3.8
Supports:
azure (adlfs = 2024.7.0, knack = 0.12.0, azure-identity = 1.18.0),
http (aiohttp = 3.10.8, aiohttp-retry = 2.8.3),
https (aiohttp = 3.10.8, aiohttp-retry = 2.8.3)
Config:
Global: /.../.config/dvc
System: /.../.config/kdedefaults/dvc
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: azure
Workspace directory: ext4 on /dev/mapper/vgkubuntu-root
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/bdf5f37be5108aada94933a567e64744
Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help!
2024-09-30 13:41:42,433 DEBUG: Analytics is enabled.
2024-09-30 13:41:42,458 DEBUG: Trying to spawn ['daemon', 'analytics', '/tmp/tmpwllf0ijo', '-v']
2024-09-30 13:41:42,465 DEBUG: Spawned ['daemon', 'analytics', '/tmp/tmpwllf0ijo', '-v'] with pid 111408
2024-09-30 13:41:42,466 DEBUG: Removing '/tmp/tmp5s8bt4cedvc-clone'
2024-09-30 13:41:42,495 DEBUG: Removing '/tmp/tmpsuoa_qcgdvc-cache'
```
Then I tried if I can push from `repo-b`. I can.
```
[repo-b] [master *]
-> % touch test
-> % uv run dvc push
Collecting
|1.00 [00:00, 234entry/s]
Pushing
1 file pushed
```
Same problem when I target a specific file:
```
[repo-b] [master *]
-> % uv run dvc get "git@gitlab.com:<org>/repo-a.git" "data/master-table.csv"
ERROR: unexpected error - [Errno 2] No storage files available: 'data/master-table.csv'
```
But the file IS on the remote. I can pull it in the cloned `repo-a`.
Also, see this:
```
-> % uv run dvc get git@gitlab.com:<org>/repo-a.git data
ERROR: unexpected error - failed to load directory ('data', 'cvat', 'datumaro-dataset'): [Errno 2] No such file or directory: '/tmp/tmp_tgyr2ymdvc-cache/files/md5/8a/6de34918ed22935e97644bf465f920.dir'
```
This file (`files/md5/8a/6de34918ed22935e97644bf465f920.dir`) DOES exist on the remote!
### Environment information
```
-> % uv pip list G dvc
dvc 3.55.2
dvc-data 3.16.5
dvc-http 2.32.0
dvc-objects 5.1.0
dvc-render 1.0.2
dvc-studio-client 0.21.0
dvc-task 0.4.0
-> % uname -a
Linux <name> 6.8.0-45-generic #45~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Sep 11 15:25:05 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
-> % python --version
Python 3.10.13
```
**Output of `dvc doctor`:**
```console
DVC version: 3.55.2 (pip)
-------------------------
Platform: Python 3.10.14 on Linux-6.8.0-45-generic-x86_64-with-glibc2.35
Subprojects:
dvc_data = 3.16.6
dvc_objects = 5.1.0
dvc_render = 1.0.2
dvc_task = 0.4.0
scmrepo = 3.3.8
Supports:
azure (adlfs = 2024.7.0, knack = 0.12.0, azure-identity = 1.18.0),
http (aiohttp = 3.10.8, aiohttp-retry = 2.8.3),
https (aiohttp = 3.10.8, aiohttp-retry = 2.8.3)
Config:
Global: /home/mbs/.config/dvc
System: /home/mbs/.config/kdedefaults/dvc
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: azure
Workspace directory: ext4 on /dev/mapper/vgkubuntu-root
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/bdf5f37be5108aada94933a567e64744
```
I already deleted `/var/tmp/dvc/`. Did not help.
|
closed
|
2024-09-30T12:26:41Z
|
2024-11-10T01:53:45Z
|
https://github.com/iterative/dvc/issues/10572
|
[
"awaiting response",
"triage"
] |
mbspng
| 10
|
tatsu-lab/stanford_alpaca
|
deep-learning
| 72
|
running the project.
|
So I downloaded and installed the requirements. I noticed utils.py is not written in normal Python, or at least I'm getting a syntax error. When I run the code I get this error:
```
valiantlynx@DESKTOP-3EGT6DL:~/stanford_alpaca$ /usr/bin/python3 /home/valiantlynx/stanford_alpaca/train.py
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.15) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
2023-03-17 01:44:16.830274: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-17 01:44:17.316763: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-03-17 01:44:18.295000: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-03-17 01:44:18.295070: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-03-17 01:44:18.295091: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Traceback (most recent call last):
  File "/home/valiantlynx/stanford_alpaca/train.py", line 25, in <module>
    import utils
  File "/home/valiantlynx/stanford_alpaca/utils.py", line 40, in <module>
    prompts: Union[str, Sequence[str], Sequence[dict[str, str]], dict[str, str]],
TypeError: 'type' object is not subscriptable
```
What I ran was train.py. I didn't edit anything. It comes from this function:
`def openai_completion(
prompts: Union[str, Sequence[str], Sequence[dict[str, str]], dict[str, str]],
decoding_args: OpenAIDecodingArguments,
model_name="text-davinci-003",
sleep_time=2,
batch_size=1,
max_instances=sys.maxsize,
max_batches=sys.maxsize,
return_text=False,
**decoding_kwargs,
) -> Union[Union[StrOrOpenAIObject], Sequence[StrOrOpenAIObject], Sequence[Sequence[StrOrOpenAIObject]],]:
"""Decode with OpenAI API.
Args:
prompts: A string or a list of strings to complete. If it is a chat model the strings should be formatted
as explained here: https://github.com/openai/openai-python/blob/main/chatml.md. If it is a chat model
it can also be a dictionary (or list thereof) as explained here:
https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
decoding_args: Decoding arguments.
model_name: Model name. Can be either in the format of "org/model" or just "model".
sleep_time: Time to sleep once the rate-limit is hit.
batch_size: Number of prompts to send in a single request. Only for non chat model.
max_instances: Maximum number of prompts to decode.
max_batches: Maximum number of batches to decode. This argument will be deprecated in the future.
return_text: If True, return text instead of full completion object (which contains things like logprob).
decoding_kwargs: Additional decoding arguments. Pass in `best_of` and `logit_bias` if you need them.
Returns:
A completion or a list of completions.
Depending on return_text, return_openai_object, and decoding_args.n, the completion type can be one of
- a string (if return_text is True)
- an openai_object.OpenAIObject object (if return_text is False)
- a list of objects of the above types (if decoding_args.n > 1)
"""
is_single_prompt = isinstance(prompts, (str, dict))
if is_single_prompt:
prompts = [prompts]
if max_batches < sys.maxsize:
logging.warning(
"`max_batches` will be deprecated in the future, please use `max_instances` instead."
"Setting `max_instances` to `max_batches * batch_size` for now."
)
max_instances = max_batches * batch_size
prompts = prompts[:max_instances]
num_prompts = len(prompts)
prompt_batches = [
prompts[batch_id * batch_size : (batch_id + 1) * batch_size]
for batch_id in range(int(math.ceil(num_prompts / batch_size)))
]
completions = []
for batch_id, prompt_batch in tqdm.tqdm(
enumerate(prompt_batches),
desc="prompt_batches",
total=len(prompt_batches),
):
batch_decoding_args = copy.deepcopy(decoding_args) # cloning the decoding_args
while True:
try:
shared_kwargs = dict(
model=model_name,
**batch_decoding_args.__dict__,
**decoding_kwargs,
)
completion_batch = openai.Completion.create(prompt=prompt_batch, **shared_kwargs)
choices = completion_batch.choices
for choice in choices:
choice["total_tokens"] = completion_batch.usage.total_tokens
completions.extend(choices)
break
except openai.error.OpenAIError as e:
logging.warning(f"OpenAIError: {e}.")
if "Please reduce your prompt" in str(e):
batch_decoding_args.max_tokens = int(batch_decoding_args.max_tokens * 0.8)
logging.warning(f"Reducing target length to {batch_decoding_args.max_tokens}, Retrying...")
else:
logging.warning("Hit request rate limit; retrying...")
time.sleep(sleep_time) # Annoying rate limit on requests.
if return_text:
completions = [completion.text for completion in completions]
if decoding_args.n > 1:
# make completions a nested list, where each entry is a consecutive decoding_args.n of original entries.
completions = [completions[i : i + decoding_args.n] for i in range(0, len(completions), decoding_args.n)]
if is_single_prompt:
# Return non-tuple if only 1 input and 1 generation.
(completions,) = completions
return completions
```
I'm not the best at Python, but I've not seen this syntax before:
`prompts: Union[str, Sequence[str], Sequence[dict[str, str]], dict[str, str]],`
`) -> Union[Union[StrOrOpenAIObject], Sequence[StrOrOpenAIObject], Sequence[Sequence[StrOrOpenAIObject]],]:`
I'm very new to ML, so maybe I'm doing everything wrong.
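For what it's worth, `dict[str, str]` and the other built-in generics are only subscriptable at runtime on Python 3.9+, so on an older interpreter that annotation raises exactly this `TypeError`. A minimal sketch of the usual workaround, assuming the Python version cannot be upgraded, is to defer annotation evaluation or fall back to the `typing` aliases:
```python
# Option 1: defer evaluation of all annotations (must be the first import in utils.py)
from __future__ import annotations

# Option 2: use the typing aliases, which are subscriptable on older interpreters
from typing import Dict, Sequence, Union

def openai_completion(
    prompts: Union[str, Sequence[str], Sequence[Dict[str, str]], Dict[str, str]],
    **decoding_kwargs,
):
    ...
```
Running the scripts on Python 3.9 or newer should also make the original annotation work unchanged.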
|
closed
|
2023-03-17T00:47:40Z
|
2023-09-05T20:26:07Z
|
https://github.com/tatsu-lab/stanford_alpaca/issues/72
|
[] |
valiantlynx
| 6
|
keras-team/keras
|
tensorflow
| 20,463
|
BackupAndRestore callback sometimes can't load checkpoint
|
When training is interrupted, sometimes the model can't restore its weights with the BackupAndRestore callback.
```python
Traceback (most recent call last):
File "/home/alex/jupyter/lab/model_fba.py", line 150, in <module>
model.fit(train_dataset, callbacks=callbacks, epochs=NUM_EPOCHS, steps_per_epoch=STEPS_PER_EPOCH, verbose=2)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 113, in error_handler
return fn(*args, **kwargs)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 311, in fit
callbacks.on_train_begin()
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/callbacks/callback_list.py", line 218, in on_train_begin
callback.on_train_begin(logs)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/callbacks/backup_and_restore.py", line 116, in on_train_begin
self.model.load_weights(self._weights_path)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 113, in error_handler
return fn(*args, **kwargs)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/models/model.py", line 353, in load_weights
saving_api.load_weights(
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/saving/saving_api.py", line 251, in load_weights
saving_lib.load_weights_only(
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/saving/saving_lib.py", line 550, in load_weights_only
weights_store = H5IOStore(filepath, mode="r")
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/saving/saving_lib.py", line 931, in __init__
self.h5_file = h5py.File(root_path, mode=self.mode)
File "/home/alex/.local/lib/python3.10/site-packages/h5py/_hl/files.py", line 561, in __init__
fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
File "/home/alex/.local/lib/python3.10/site-packages/h5py/_hl/files.py", line 235, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 102, in h5py.h5f.open
OSError: Unable to synchronously open file (bad object header version number)
```
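A minimal sketch of a possible workaround, under the assumption that the backup file simply gets truncated when training is killed mid-write: verify the checkpoint opens as HDF5 before calling `fit()`, and delete it if it doesn't, so `BackupAndRestore` restarts from scratch instead of crashing.
```python
import os
import h5py

backup_dir = "backup"
# exact filename depends on the Keras version; "latest.weights.h5" is an assumption
weights_path = os.path.join(backup_dir, "latest.weights.h5")

if os.path.exists(weights_path):
    try:
        with h5py.File(weights_path, "r"):
            pass  # file opens cleanly, keep the checkpoint
    except OSError:
        os.remove(weights_path)  # corrupted checkpoint, drop it and retrain from epoch 0
```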
|
closed
|
2024-11-07T05:57:29Z
|
2024-11-11T16:49:36Z
|
https://github.com/keras-team/keras/issues/20463
|
[
"type:Bug"
] |
shkarupa-alex
| 1
|
ShishirPatil/gorilla
|
api
| 45
|
[bug] Hosted Gorilla: <Issue>
|
Exception: Error communicating with OpenAI: HTTPConnectionPool(host='34.132.127.197', port=8000): Max retries exceeded with url: /v1/chat/completions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa8bdf53c40>: Failed to establish a new connection: [Errno 111] Connection refused'))
Failed model: gorilla-7b-hf-v0, for prompt: I would like to translate from English to Chinese
|
closed
|
2023-06-09T04:12:38Z
|
2023-06-09T05:11:29Z
|
https://github.com/ShishirPatil/gorilla/issues/45
|
[
"hosted-gorilla"
] |
luismanriqueruiz
| 1
|
indico/indico
|
sqlalchemy
| 6,127
|
Add per-keyword coloring and legend in event calendar view
|
### Is your feature request related to a problem?
With #6105 and #6106 it will be possible to display event in the event calendar view and its legend based on category and location. It would be useful to also present the legend and color the event based on event keywords.
### Describe the solution you'd like
Add "keyword" to the legend dropdrown, allowing users to select coloring events in the calendar by keyword. Since events can have zero, one or more than one keyword, the legend should include two special items:
- [No keyword]
- [Multiple keywords]
In order to distinguish these special items, we could:
1. Display them always on top of the list.
2. Render text differently (e.g. italic?).
### Describe alternatives you've considered
One problem with this feature is that event keywords can be arbitrary. The list, hence, can grow pretty large. Based on my discussion with @ThiefMaster last Wednesday, this feature should:
1. Be implemented alongside the possibility to define a list of curated event keywords in the admin settings, perhaps in the `/admin/event-labels/` view.
2. Choosing "keyword" in the event calendar legend should be enabled only when the list of keywords is curated.
Some thoughts for point `1.`:
- Clarify the difference between event keywords in event labels both in `/admin/event-labels/` view and the `/event/<event_id>/manage/` view.
- Once there is a curated list of keywords, the widget in the `/event/<event_id>/manage/` view should be a dropdown. This dropdown could be the same as the one used for choosing labels in event registrations.
**Additional context**
The feature is a stepping stone for filtering events in the calendar by location.
This feature request comes inspired by [Perimeter Institute](https://perimeterinstitute.ca/)'s in-house developed calendar view:
<img width="1272" alt="image" src="https://github.com/indico/indico/assets/716307/1cca5b7b-da74-47a4-91ed-cdc514ede47e">
|
closed
|
2024-01-12T16:51:16Z
|
2024-03-05T17:14:02Z
|
https://github.com/indico/indico/issues/6127
|
[
"enhancement"
] |
OmeGak
| 0
|
microsoft/MMdnn
|
tensorflow
| 584
|
tf->caffe error Tensorflow has not supported operator [Softmax] with name [eval_prediction]
|
Platform (ubuntu 16.04):
Python version:3.6
Source framework with version (tensorflow-gpu 1.12.0):
Destination framework with version (caffe-gpu 1.0 ):
(py3-mmdnn) root@ubuntu:/home/j00465446/test/ckpt_model# mmconvert -sf tensorflow -in my_model.ckpt.meta -iw my_model.ckpt --dstNode eval_prediction -df caffe -om outmodel
Parse file [my_model.ckpt.meta] with binary format successfully.
Tensorflow model file [my_model.ckpt.meta] loaded successfully.
Tensorflow checkpoint file [my_model.ckpt] loaded successfully. [27] variables loaded.
**Tensorflow has not supported operator [Softmax] with name [eval_prediction].**
IR network structure is saved as [2d6f6575a1414bd1a89c5db0b01b93b8.json].
IR network structure is saved as [2d6f6575a1414bd1a89c5db0b01b93b8.pb].
IR weights are saved as [2d6f6575a1414bd1a89c5db0b01b93b8.npy].
Parse file [2d6f6575a1414bd1a89c5db0b01b93b8.pb] with binary format successfully.
Target network code snippet is saved as [2d6f6575a1414bd1a89c5db0b01b93b8.py].
Target weights are saved as [2d6f6575a1414bd1a89c5db0b01b93b8.npy].
**WARNING: Logging before InitGoogleLogging() is written to STDERR
E0213 15:17:43.837237 49862 common.cpp:114] Cannot create Cublas handle. Cublas won't be available.
E0213 15:17:43.837512 49862 common.cpp:121] Cannot create Curand generator. Curand won't be** available.
I0213 15:17:43.837596 49862 net.cpp:51] Initializing net from parameters:
state {
phase: TRAIN
level: 0
}
layer {
name: "eval_data"
type: "Input"
top: "eval_data"
input_param {
shape {
dim: 1
dim: 4
dim: 1000
dim: 1
}
}
}
layer {
name: "Conv2D"
type: "Convolution"
bottom: "eval_data"
top: "Conv2D"
convolution_param {
num_output: 16
bias_term: true
group: 1
stride: 1
pad_h: 0
pad_w: 0
kernel_h: 5
kernel_w: 1
}
}
layer {
name: "Relu"
type: "ReLU"
bottom: "Conv2D"
top: "Conv2D"
}
layer {
name: "MaxPool"
type: "Pooling"
bottom: "Conv2D"
top: "MaxPool"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
pad_h: 0
pad_w: 0
}
}
layer {
name: "Conv2D_1"
type: "Convolution"
bottom: "MaxPool"
top: "Conv2D_1"
convolution_param {
num_output: 16
bias_term: true
group: 1
stride: 1
pad_h: 0
pad_w: 0
kernel_h: 5
kernel_w: 1
}
}
layer {
name: "Relu_1"
type: "ReLU"
bottom: "Conv2D_1"
top: "Conv2D_1"
}
layer {
name: "MaxPool_1"
type: "Pooling"
bottom: "Conv2D_1"
top: "MaxPool_1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
pad_h: 0
pad_w: 0
}
}
layer {
name: "fc_feature"
type: "Flatten"
bottom: "MaxPool_1"
top: "fc_feature"
}
layer {
name: "MatMul"
type: "InnerProduct"
bottom: "fc_feature"
top: "MatMul"
inner_product_param {
num_output: 64
bias_term: true
}
}
layer {
name: "Relu_2"
type: "ReLU"
bottom: "MatMul"
top: "MatMul"
}
layer {
name: "MatMul_1"
type: "InnerProduct"
bottom: "MatMul"
top: "MatMul_1"
inner_product_param {
num_output: 2
bias_term: true
}
}
I0213 15:17:43.837741 49862 layer_factory.hpp:77] Creating layer eval_data
I0213 15:17:43.837772 49862 net.cpp:84] Creating Layer eval_data
I0213 15:17:43.837781 49862 net.cpp:380] eval_data -> eval_data
I0213 15:17:43.837941 49862 net.cpp:122] Setting up eval_data
I0213 15:17:43.837960 49862 net.cpp:129] Top shape: 1 4 1000 1 (4000)
I0213 15:17:43.837965 49862 net.cpp:137] Memory required for data: 16000
I0213 15:17:43.837976 49862 layer_factory.hpp:77] Creating layer Conv2D
I0213 15:17:43.837991 49862 net.cpp:84] Creating Layer Conv2D
I0213 15:17:43.837997 49862 net.cpp:406] Conv2D <- eval_data
I0213 15:17:43.838013 49862 net.cpp:380] Conv2D -> Conv2D
**F0213 15:17:43.838270 49862 cudnn_conv_layer.cpp:52] Check failed: error == cudaSuccess (35 vs. 0) CUDA driver version is insufficient for CUDA runtime version**
*** Check failure stack trace: ***
Aborted (core dumped)
|
closed
|
2019-02-13T07:29:40Z
|
2019-02-14T02:08:07Z
|
https://github.com/microsoft/MMdnn/issues/584
|
[] |
jq460494839
| 4
|
yt-dlp/yt-dlp
|
python
| 12,080
|
Could downmixing stereo audio to mono audio be executed from the Server side?
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
First of all, thank you for this wonderful little program. I began using it only today.
Earlier, I had tried using the Debian channel package without success. YouTube changes the codes so quickly that by the time Debian releases a revised version it is already too old to function as intended.
This time I used the updated version with python using `pip install`. So my project was successful.
I can confirm that the following command downmixes stereo audio to mono audio:
`yt-dlp -x --audio-format ogg --postprocessor-args "-ac 1" <URL>`
However, this is using the system's `ffmpeg` program on the already downloaded data bytes to downmix audio samples to mono samples.
Therefore, I was wondering whether the downmixing could happen on the server side itself, through a cloud application that does the same.
### Provide verbose output that clearly demonstrates the problem
There is no problem. The program works perfectly well. Hence, the requirement to respond to the questions below doesn't arise.
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
Not Applicable.
|
closed
|
2025-01-14T16:24:37Z
|
2025-01-16T16:35:41Z
|
https://github.com/yt-dlp/yt-dlp/issues/12080
|
[
"invalid"
] |
rajibando
| 4
|
sebp/scikit-survival
|
scikit-learn
| 277
|
Get the number of iterations needed to train a model
|
**Is your feature request related to a problem? Please describe.**
I am doing some training time measurements of the `FastKernelSurvivalSVM` model. The problem is that I would like to know how much time it takes for each iteration, but I don't know how many iterations were actually run. It would be nice if the model had an attribute to access this.
**Describe the solution you'd like**
Expose a new attribute (for example, `n_iter`, similarly to modern Scikit-Learn versions, where they [added the n_iter_](https://github.com/scikit-learn/scikit-learn/pull/21408) attribute).
**Describe alternatives you've considered**
I'm accessing the number of iterations through the `optimizer_result_` attribute:
```python
estimator = FastKernelSurvivalSVM(rank_ratio=0.0, max_iter=1000, tol=1e-5)
# Trains the model...
estimator.optimizer_result_.nit
```
But I'm not really convinced if what I'm doing is right. It'd be cool to have an official solution.
Thanks in advance, this is a great lib :heart:
|
closed
|
2022-06-29T16:51:56Z
|
2022-08-14T07:55:19Z
|
https://github.com/sebp/scikit-survival/issues/277
|
[
"enhancement"
] |
Genarito
| 2
|
scikit-optimize/scikit-optimize
|
scikit-learn
| 401
|
HDF5 Support
|
Hi all,
If possible, I would like to add support for saving optimization results in HDF5 format. I have been running many hyperparameter optimization procedures and have been thinking that, rather than dumping many pickle files, it would be nice to write to HDF5 with h5py.
Has anyone had a go at this? Would there be any immediate pitfalls/concerns for this approach?
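In case it helps the discussion, a minimal sketch of the kind of writer I have in mind, assuming a purely numeric search space (categorical dimensions would need extra handling) and keeping only the arrays from the `OptimizeResult`:
```python
import h5py
import numpy as np

def save_result_h5(result, path):
    """Store the core arrays of a skopt OptimizeResult in an HDF5 file."""
    with h5py.File(path, "w") as f:
        f.create_dataset("x_iters", data=np.asarray(result.x_iters, dtype=float))
        f.create_dataset("func_vals", data=np.asarray(result.func_vals, dtype=float))
        f.create_dataset("x", data=np.asarray(result.x, dtype=float))
        f.attrs["fun"] = float(result.fun)
```
The callable objective and any fitted models would still need pickling (or be skipped), which is probably the main design question.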
|
open
|
2017-06-14T21:29:51Z
|
2017-06-22T08:20:16Z
|
https://github.com/scikit-optimize/scikit-optimize/issues/401
|
[
"API"
] |
yngtodd
| 12
|
xuebinqin/U-2-Net
|
computer-vision
| 306
|
Background removal module
|
_Hi @xuebinqin_
I recently created a module to quickly embed U2Net into my project. Maybe it will be useful for someone
https://github.com/kos94ok/bgRemoverApp
|
open
|
2022-05-18T12:16:04Z
|
2022-10-25T04:53:52Z
|
https://github.com/xuebinqin/U-2-Net/issues/306
|
[] |
kos94ok
| 2
|
jumpserver/jumpserver
|
django
| 14,354
|
[Question] Why can't I view the output of my own job?
|
### Product Version
v4.3.0
### Product Edition
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [X] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Why don't I have permission to view the output of my own job?
### 🤔 Question Description
Why don't I have permission to view the output of my own job?
### Expected Behavior
_No response_
### Additional Information
_No response_
|
closed
|
2024-10-24T02:40:41Z
|
2024-11-22T02:01:45Z
|
https://github.com/jumpserver/jumpserver/issues/14354
|
[
"⏳ Pending feedback",
"🤔 Question",
"📦 z~release:v4.4.0"
] |
minorCoder
| 10
|
amidaware/tacticalrmm
|
django
| 855
|
Update Demo
|
Hello,
is it possible that you update the demo to the latest version?
Thanks
|
closed
|
2021-12-11T15:02:57Z
|
2021-12-23T08:08:27Z
|
https://github.com/amidaware/tacticalrmm/issues/855
|
[] |
MalteKiefer
| 5
|
AirtestProject/Airtest
|
automation
| 1,063
|
Existing image incorrectly reported as "File not exist"
|
:bulb:**Related project:** Airtest
**Title:** [Bug report] Existing image incorrectly reported as "File not exist"
**AirtestIDE version:** 1.2.11
- [x] **Running the script with a local Python environment**
- Python version: 3.6.5
- Airtest version: v1.2.4
**Error description:**
This is an intermittent bug, but once it has appeared, it then happens every single time.
I use Airtest to test a tool. If the tool is suddenly upgraded, some UI changes may make image recognition inaccurate. In that case the error should be "not find in screen", but instead it reports "File not exist", even though the image definitely exists in the .air folder.
Note: normally the script has a 30-minute limit and times out after 30 minutes, but this error causes the script to get stuck here in an endless loop.
**Related screenshot:**

**Error log:**
```
None
```
**Connected device info:**
None
**Minimal code to reproduce this bug:**
```
Suggested focus for debugging: the code related to the swipe(pic1, pic2) function.
For example, with swipe(pic1, pic2), pic1 can no longer be recognized, yet the error reported is File not exist: pic2.png.
Temporary workaround: re-capture the screenshots for pic1 and pic2.
```
|
closed
|
2022-07-05T08:16:32Z
|
2022-09-16T07:46:51Z
|
https://github.com/AirtestProject/Airtest/issues/1063
|
[] |
wjwABCDEFG
| 2
|
davidteather/TikTok-Api
|
api
| 284
|
DownloadTiktok.py Example is not working
|
With any video URL, it raises an exception:
Traceback (most recent call last):
File "c:/Users/User/Desktop/runing_example.py", line 6, in <module>
tiktokData = api.get_Video_By_Url("https://www.tiktok.com/@user/video/000example000", return_bytes=1)
File "C:\Python\lib\site-packages\TikTokApi\tiktok.py", line 909, in get_Video_By_Url
raise Exception("Deprecated. Other Methods Work Better.")
Exception: Deprecated. Other Methods Work Better.
|
closed
|
2020-10-04T15:12:49Z
|
2020-10-05T22:35:31Z
|
https://github.com/davidteather/TikTok-Api/issues/284
|
[
"bug"
] |
DonPepeSun
| 2
|
LAION-AI/Open-Assistant
|
python
| 2,800
|
500 - Open Assistant ERROR
|
Sorry, we encountered a server error. We're not sure what went wrong.
The error has started to appear very often, even with smaller dialogs.
If you tried to open a web page and saw an Internal Server Error 500, you can do the following.
Wait...
Notify administrator...
Check htaccess file. ...
Check error log...
Check contents of CGI scripts...
Check plugins and components...
Increase server RAM
|
closed
|
2023-04-21T05:17:33Z
|
2023-04-21T08:05:03Z
|
https://github.com/LAION-AI/Open-Assistant/issues/2800
|
[] |
buddhadhammaliveexpedition
| 1
|
dynaconf/dynaconf
|
django
| 299
|
[RFC] Allow more merging strategies
|
Based on #289 implement a way to override strategies for merging
Comments below:
|
closed
|
2020-02-26T17:34:51Z
|
2024-07-08T18:27:32Z
|
https://github.com/dynaconf/dynaconf/issues/299
|
[
"wontfix",
"Not a Bug",
"RFC",
"HIGH",
"django"
] |
rochacbruno
| 13
|
ansible/ansible
|
python
| 84,604
|
powershell edition is switched to powershell.exe with become even with pwsh.exe
|
### Summary
When using `PowerShell.7` as the remote configuration for Windows, `become: true` will always use `powershell.exe`, which breaks scripts that only work with PowerShell-7.
### Issue Type
Bug Report
### Component Name
runas
### Ansible Version
```console
ansible [core 2.17.5]
config file = /home/yzhao/ansible/ansible.cfg
configured module search path = ['/home/yzhao/ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yzhao/ansible/.venv/lib/python3.12/site-packages/ansible
ansible collection location = /home/yzhao/ansible/collections:/usr/share/ansible/collections
executable location = /home/yzhao/ansible/.venv/bin/ansible
python version = 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] (/home/yzhao/ansible/.venv/bin/python3)
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
ANSIBLE_HOME(/home/yzhao/ansible/ansible.cfg) = /home/yzhao/ansible
CONFIG_FILE() = /home/yzhao/ansible/ansible.cfg
DEFAULT_HOST_LIST(/home/yzhao/ansible/ansible.cfg) = ['/home/yzhao/ansible/hosts', '/home/yzhao/ansible/hosts.local.yml']
```
### OS / Environment
Linux SB-PC-YZHAO4 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Distributor ID: Ubuntu
Description: Ubuntu 24.04.1 LTS
Release: 24.04
Codename: noble
### Steps to Reproduce
Using `ansible_psrp_configuration_name=PowerShell.7` to explicitly request `pwsh.exe` for the whole session:
```yaml (paste below)
# pwsh_test.yml
- name: Powershell tests
hosts: SB-PC-IPBUILD3.skyboxlabs.com
gather_facts: false
vars:
ansible_psrp_configuration_name: PowerShell.7
tasks:
- name: Print version
ansible.windows.win_powershell:
script: echo $PSVersionTable.PSVersion.Major
check_mode: false
```
### Expected Results
(some formatting changed for brevity)
Expecting `output` to be `7`.
```
$ ansible-playbook pwsh_test.yml --diff -v --check -b
PLAY [Powershell tests] *****************************************************************************************************************************************************
TASK [Print version] *****************************************************************************************************************************************************
changed: [SB-PC-IPBUILD3.skyboxlabs.com] => {"changed": true, "debug": [], "error": [], "host_err": "", "host_out": "", "information": [], "output": [7], "result": {}, "verbose": [], "warning": []}
PLAY RECAP ****************************************************************************************************************************************
SB-PC-IPBUILD3.skyboxlabs.com : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
Expecting `output` is `5`.
$ ansible-playbook pwsh_test.yml --diff -vvvv --check -b
ansible-playbook [core 2.17.5]
config file = /home/yzhao/ansible/ansible.cfg
configured module search path = ['/home/yzhao/ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yzhao/ansible/.venv/lib/python3.12/site-packages/ansible
ansible collection location = /home/yzhao/ansible/collections:/usr/share/ansible/collections
executable location = /home/yzhao/ansible/.venv/bin/ansible-playbook
python version = 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] (/home/yzhao/ansible/.venv/bin/python3)
jinja version = 3.1.4
libyaml = True
Using /home/yzhao/ansible/ansible.cfg as config file
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /home/yzhao/ansible/hosts as it did not pass its verify_file() method
script declined parsing /home/yzhao/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /home/yzhao/ansible/hosts as it did not pass its verify_file() method
Parsed /home/yzhao/ansible/hosts inventory source with yaml plugin
setting up inventory plugins
host_list declined parsing /home/yzhao/ansible/hosts.local.yml as it did not pass its verify_file() method
script declined parsing /home/yzhao/ansible/hosts.local.yml as it did not pass its verify_file() method
Parsed /home/yzhao/ansible/hosts.local.yml inventory source with yaml plugin
Loading collection ansible.windows from /home/yzhao/ansible/.venv/lib/python3.12/site-packages/ansible_collections/ansible/windows
Loading callback plugin default of type stdout, v2.0 from /home/yzhao/ansible/.venv/lib/python3.12/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: pwsh_test.yml *************************************************************************************************************************************************************************************************************************************************
Positional arguments: pwsh_test.yml
verbosity: 4
connection: ssh
become: True
become_method: sudo
tags: ('all',)
check: True
diff: True
inventory: ('/home/yzhao/ansible/hosts', '/home/yzhao/ansible/hosts.local.yml')
forks: 5
1 plays in pwsh_test.yml
PLAY [Powershell tests] *************************************************************************************************************************************************************************************************************************************************
TASK [Print version] ****************************************************************************************************************************************************************************************************************************************************
task path: /home/yzhao/ansible/pwsh_test.yml:7
Using module file /home/yzhao/ansible/.venv/lib/python3.12/site-packages/ansible_collections/ansible/windows/plugins/modules/win_powershell.ps1
Pipelining is enabled.
<SB-PC-IPBUILD3.skyboxlabs.com> ESTABLISH PSRP CONNECTION FOR USER: adm.yzhao@SKYBOXLABS.COM ON PORT 5985 TO SB-PC-IPBUILD3.skyboxlabs.com
PSRP: EXEC (via pipeline wrapper)
changed: [SB-PC-IPBUILD3.skyboxlabs.com] => {
"changed": true,
"debug": [],
"error": [],
"host_err": "",
"host_out": "",
"information": [],
"invocation": {
"module_args": {
"arguments": null,
"chdir": null,
"creates": null,
"depth": 2,
"error_action": "continue",
"executable": null,
"parameters": null,
"removes": null,
"script": "echo $PSVersionTable.PSVersion.Major",
"sensitive_parameters": null
}
},
"output": [
5
],
"result": {},
"verbose": [],
"warning": []
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************
SB-PC-IPBUILD3.skyboxlabs.com : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
open
|
2025-01-24T17:23:47Z
|
2025-01-30T17:58:45Z
|
https://github.com/ansible/ansible/issues/84604
|
[
"bug",
"affects_2.17"
] |
yangskyboxlabs
| 4
|
pydantic/FastUI
|
fastapi
| 39
|
Dark Mode
|
I know, I know, but still - it is nice to have.
Shouldn't be too hard to implement - the default implementations just need to change `color` and `background-color`, and bootstrap already has support for dark mode via css variables.
|
open
|
2023-12-01T19:54:53Z
|
2024-05-02T18:39:32Z
|
https://github.com/pydantic/FastUI/issues/39
|
[
"enhancement"
] |
samuelcolvin
| 3
|
aio-libs/aiopg
|
sqlalchemy
| 530
|
How to specify 'on the fly' which PostgreSQL schema to use for sqlalchemy queries?
|
I want to select the schema name before running a query.
In traditional sqlalchemy I do:
```python
from sqlalchemy import create_engine
engine = create_engine(...).execution_options(schema_translate_map={None: 'some_schema_name'})
```
And all subsequent queries for each model will then use this path:
```sql
select * from some_schema_name.table;
```
But I ran into an error using aiopg:
```python
from aiopg.sa import create_engine
engine = create_engine(...).execution_options
AttributeError: 'Engine' object has no attribute 'execution_options'
```
I tried to find this in the [docs](https://media.readthedocs.org/pdf/aiopg/stable/aiopg.pdf), but couldn't.
What is the correct way to specify the schema name in `aiopg.sa`?
aiopg==0.15.0
SQLAlchemy==1.2.16
Thank you!
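One possible workaround, sketched under the assumption that setting the server-side `search_path` per connection is acceptable (table and schema names below are placeholders):
```python
import asyncio
from aiopg.sa import create_engine

async def query_with_schema():
    async with create_engine(user="user", database="db",
                             host="127.0.0.1", password="pw") as engine:
        async with engine.acquire() as conn:
            # unqualified table names on this connection now resolve
            # against some_schema_name first
            await conn.execute("SET search_path TO some_schema_name")
            result = await conn.execute("SELECT * FROM my_table")
            return await result.fetchall()

asyncio.get_event_loop().run_until_complete(query_with_schema())
```
This is per-connection state rather than the engine-wide `schema_translate_map` behaviour, so an official answer would still be appreciated.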
|
closed
|
2019-01-22T13:19:21Z
|
2019-01-25T20:40:49Z
|
https://github.com/aio-libs/aiopg/issues/530
|
[] |
nickmetal
| 3
|
ploomber/ploomber
|
jupyter
| 283
|
present at the Argo Workflows community meeting please
|
please come and present
|
closed
|
2021-01-22T22:29:26Z
|
2021-01-25T16:07:28Z
|
https://github.com/ploomber/ploomber/issues/283
|
[] |
alexec
| 1
|
erdewit/ib_insync
|
asyncio
| 576
|
Market data bar datetime - incorrect timezone
|
## Description
When requesting live market data, the bar datetimes are not returned according to the TWS timezone.
## Steps to reproduce
- Connect to TWS using the US/Eastern timezone.
- Stream market data for an asset with a different TimeZoneId (e.g. `Future("ES") = US/Central`)
- The bars datetime is in the US/Central time
### Script
```
from ib_insync import *
from datetime import datetime
import asyncio
import nest_asyncio
import pytz
def on_new_bar(bars, has_new_bar):
if has_new_bar:
print("Current datetime", datetime.now())
print("Last bar", bars[-1])
async def run():
ib = IB()
ib.TimezoneTWS = pytz.timezone('US/Eastern')
ib.connect('localhost', 4002)
print("tws timezone", ib.TimezoneTWS)
contract = ContFuture("ES", exchange="CME")
data = ib.reqHistoricalData(
contract,
'',
barSizeSetting="1 min",
keepUpToDate=True,
whatToShow="TRADES",
useRTH=True,
durationStr="1 D"
)
data.updateEvent += on_new_bar
while True:
await asyncio.sleep(1)
nest_asyncio.apply()
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.ensure_future(run()))
```
### Output
```
tws timezone US/Eastern
Current datetime 2023-04-14 13:30:05.342473
Last bar BarData(date=datetime.datetime(2023, 4, 14, 12, 30), open=4143.75, high=4143.75, low=4142.25, close=4142.75, volume=995.0, average=4142.845226130653, barCount=254)
```
[Also, the type hint for TimezoneTWS is incorrect, it shouldn't be a str ?](https://github.com/erdewit/ib_insync/blob/fcc7d0bc3544aa96dbe94e9ab68071e5260611c8/ib_insync/ib.py#L197)
```
File "ib_insync/decoder.py", line 450, in execDetails
time = tz.localize(time)
AttributeError: 'str' object has no attribute 'localize'
```
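As a stopgap, a minimal sketch of normalizing the timestamps on the client side, assuming the bars are effectively in the contract's exchange timezone (US/Central for ES) and should be shown in US/Eastern:
```python
import pytz

central = pytz.timezone("US/Central")   # timezone the bars appear to be in
eastern = pytz.timezone("US/Eastern")   # timezone I actually want

def to_eastern(bar_dt):
    """Attach the exchange timezone to a naive bar datetime and convert it."""
    if bar_dt.tzinfo is None:
        bar_dt = central.localize(bar_dt)
    return bar_dt.astimezone(eastern)
```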
|
closed
|
2023-04-14T17:39:16Z
|
2024-02-11T22:26:42Z
|
https://github.com/erdewit/ib_insync/issues/576
|
[] |
rundef
| 0
|
grok-ai/nn-template
|
streamlit
| 97
|
Switch from pytorch_lightning to lightning
|
We should change all the imports from `pytorch_lightning` to `lightning`
However, naively substituting all the occurrences does not work -- unknown reasons atm.
|
closed
|
2023-10-11T12:53:28Z
|
2023-10-12T19:58:30Z
|
https://github.com/grok-ai/nn-template/issues/97
|
[
"good first issue"
] |
lucmos
| 1
|
chaos-genius/chaos_genius
|
data-visualization
| 590
|
Fix the sorting logic in the KPI and Dashboard
|
Sort the entities in descending order instead of the ascending order.
- [ ] Home Screen
- [ ] Dashboard Card View
|
closed
|
2022-01-14T04:05:15Z
|
2022-02-04T05:24:44Z
|
https://github.com/chaos-genius/chaos_genius/issues/590
|
[
"🐛 bug",
"🖥️ frontend"
] |
manassolanki
| 0
|
allenai/allennlp
|
pytorch
| 4,772
|
GQA dataset reader
|
GQA is here: https://cs.stanford.edu/people/dorarad/gqa/index.html
We want this reader to work much like the VQA reader, so it produces the same kind of output, and we can use the same models with it.
You might have to pull out components that are common to the two readers to avoid too much copy and paste.
|
closed
|
2020-11-06T23:55:18Z
|
2020-12-14T18:22:56Z
|
https://github.com/allenai/allennlp/issues/4772
|
[] |
dirkgr
| 2
|
graphdeco-inria/gaussian-splatting
|
computer-vision
| 919
|
Installing SIBR_viewers on Windows fails
|
Has anyone tried building SIBR_viewers with cmake on Windows?
I encountered a problem in cmake, which appears to be caused by OpenCV, but I still couldn't solve it.
The error message is as follows:
```
-- Found a 3rdParty OPENCV dir : D:/project/studytwo/gaussian-splatting/SIBR_viewers/extlibs/opencv.
-- OpenCV ARCH: x64
-- OpenCV RUNTIME:
-- OpenCV STATIC: OFF
CMake Warning at extlibs/opencv/install/OpenCVConfig.cmake:190 (message):
Found OpenCV Windows Pack but it has no binaries compatible with your
configuration.
You should manually point CMake variable OpenCV_DIR to your build of OpenCV
library.
Call Stack (most recent call first):
cmake/windows/dependencies.cmake:233 (find_package)
cmake/windows/include_once.cmake:20 (include)
src/CMakeLists.txt:46 (include_once)
CMake Error at cmake/windows/dependencies.cmake:233 (find_package):
Found package configuration file:
D:/project/studytwo/gaussian-splatting/SIBR_viewers/extlibs/opencv/install/OpenCVConfig.cmake
but it set OpenCV_FOUND to FALSE so package "OpenCV" is considered to be
NOT FOUND.
Call Stack (most recent call first):
cmake/windows/include_once.cmake:20 (include)
src/CMakeLists.txt:46 (include_once)
-- Configuring incomplete, errors occurred!
```
Does anyone know how to solve this? Thank you so much!
|
open
|
2024-08-02T14:36:09Z
|
2024-09-03T08:23:45Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/919
|
[] |
xuncpp
| 2
|
plotly/dash-bio
|
dash
| 534
|
use _imports_, not load_components
|
`__init__.py` is still using `_dash.development.component_loader.load_components` - this has long been obsoleted by the build process creating `_imports_.py`. See eg [dcc's `__init__.py`](https://github.com/plotly/dash-core-components/blob/dev/dash_core_components_base/__init__.py)
|
closed
|
2020-12-10T16:33:06Z
|
2021-06-24T17:01:36Z
|
https://github.com/plotly/dash-bio/issues/534
|
[] |
alexcjohnson
| 0
|
biolab/orange3
|
numpy
| 6,163
|
Changed code signature in macOS version 3.33.0?
|
I've received a warning that the code signature has changed in download since 3.32.0
https://download.biolab.si/download/files/Orange3-3.33.0-Python3.8.8.dmg
```
EXPECTED
Univerza v Ljubljani, Fakulteta za racunalnistvo (556DY3FJ29)
but FOUND
Revelo d.o.o. (AX63C2T8RC)
```
can you confirm this?
**How can we reproduce the problem?**
```
codesign -dv --verbose=4 path/to/app/Orange3.app
Authority=Developer ID Application: Revelo d.o.o. (AX63C2T8RC)
```
**What's your environment?**
macOS 12.6 / intel
```
CFBundleIdentifier: si.biolab.orange
CFBundleShortVersionString: 3.33.0
CFBundleVersion: 3.33.0
```
|
closed
|
2022-10-06T16:01:27Z
|
2022-10-06T17:43:04Z
|
https://github.com/biolab/orange3/issues/6163
|
[
"bug report"
] |
suschizu
| 2
|
chaos-genius/chaos_genius
|
data-visualization
| 637
|
Handle URLs and metadata in alert messages
|
Handle URLs and alert UI metadata in alert messages on Slack and email.
|
closed
|
2022-02-03T03:37:11Z
|
2022-02-16T18:07:10Z
|
https://github.com/chaos-genius/chaos_genius/issues/637
|
[
"🛠️ backend",
"P1"
] |
suranah
| 0
|
iterative/dvc
|
data-science
| 10,319
|
`dvc.api.open`: broken with `no_scm`
|
# Bug Report
## Description
`dvc.api.open()` doesn't work when `core.no_scm = true`.
### Reproduce
Script:
```
DIR=$(mktemp -d)
cd $DIR
dvc init -q --no-scm
echo foo > foo
dvc add foo
rm foo
python -c "import dvc.api
with dvc.api.open('.', 'foo') as f:
print(f.read())"
```
Output:
```
100% Adding...|█████████████████████████████████████████████████|1/1 [00:00, 46.62file/s]
Traceback (most recent call last):
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/site-packages/scmrepo/git/backend/dulwich/__init__.py", line 260, in clone
repo = clone_from()
^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/site-packages/dulwich/porcelain.py", line 546, in clone
return client.clone(
^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/site-packages/dulwich/client.py", line 753, in clone
result = self.fetch(path, target, progress=progress, depth=depth)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/site-packages/dulwich/client.py", line 1510, in fetch
with self._open_repo(path) as r:
^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/site-packages/dulwich/client.py", line 1432, in _open_repo
return closing(Repo(path))
^^^^^^^^^^
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/site-packages/dulwich/repo.py", line 1155, in __init__
raise NotGitRepository(
dulwich.errors.NotGitRepository: No git repository was found at foo
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/dave/Code/dvc/dvc/scm.py", line 152, in clone
git = Git.clone(url, to_path, progress=pbar.update_git, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/site-packages/scmrepo/git/__init__.py", line 154, in clone
backend.clone(url, to_path, bare=bare, mirror=mirror, **kwargs)
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/site-packages/scmrepo/git/backend/dulwich/__init__.py", line 268, in clone
raise CloneError(url, to_path) from exc
scmrepo.exceptions.CloneError: Failed to clone repo 'foo' to '/var/folders/24/99_tf1xj3vx8k1k_jkdmnhq00000gn/T/tmpnorzctwbdvc-clone'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/dave/Code/dvc/dvc/api/data.py", line 240, in _open
with Repo.open(repo, rev=rev, **repo_kwargs) as _repo:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/Code/dvc/dvc/repo/__init__.py", line 295, in open
return open_repo(url, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/Code/dvc/dvc/repo/open_repo.py", line 60, in open_repo
return _external_repo(url, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/Users/dave/Code/dvc/dvc/repo/open_repo.py", line 23, in _external_repo
path = _cached_clone(url, rev)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/Code/dvc/dvc/repo/open_repo.py", line 134, in _cached_clone
clone_path, shallow = _clone_default_branch(url, rev)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/site-packages/funcy/decorators.py", line 47, in wrapper
return deco(call, *dargs, **dkwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/site-packages/funcy/flow.py", line 246, in wrap_with
return call()
^^^^^^
File "/Users/dave/micromamba/envs/dvc/lib/python3.11/site-packages/funcy/decorators.py", line 68, in __call__
return self._func(*self._args, **self._kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/Code/dvc/dvc/repo/open_repo.py", line 198, in _clone_default_branch
git = clone(url, clone_path)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/Code/dvc/dvc/scm.py", line 157, in clone
raise CloneError("SCM error") from exc
dvc.scm.CloneError: SCM error
```
### Expected
Script should print `foo`.
### Environment information
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.47.1.dev2+g3309e1562
-----------------------------------
Platform: Python 3.11.7 on macOS-14.3.1-arm64-arm-64bit
Subprojects:
dvc_data = 3.13.0
dvc_objects = 5.0.0
dvc_render = 1.0.1
dvc_task = 0.3.0
scmrepo = 3.1.0
Supports:
azure (adlfs = 2023.12.0, knack = 0.11.0, azure-identity = 1.15.0),
gdrive (pydrive2 = 1.19.0),
gs (gcsfs = 2024.2.0),
hdfs (fsspec = 2024.2.0, pyarrow = 14.0.2),
http (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
oss (ossfs = 2023.12.0),
s3 (s3fs = 2024.2.0, boto3 = 1.33.13),
ssh (sshfs = 2023.10.0),
webdav (webdav4 = 0.9.8),
webdavs (webdav4 = 0.9.8),
webhdfs (fsspec = 2024.2.0)
Config:
Global: /Users/dave/Library/Application Support/dvc
System: /Library/Application Support/dvc
```
|
closed
|
2024-02-26T13:31:01Z
|
2024-02-26T15:26:40Z
|
https://github.com/iterative/dvc/issues/10319
|
[
"bug",
"A: api"
] |
dberenbaum
| 2
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 845
|
Zsh: parse error near '\n'
|
OS: Kali Linux
When I try to launch the ''program'' with the command `python demo_toolbox.py -d <datasets_root>`, it shows this error:

|
closed
|
2021-09-10T19:03:45Z
|
2022-12-09T06:01:53Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/845
|
[] |
Arnau97
| 3
|
Lightning-AI/LitServe
|
rest-api
| 45
|
Enable batching with streaming
|
Also please add open an issue for batching+streaming as part of addressing the review
_Originally posted by @lantiga in https://github.com/Lightning-AI/litserve/pull/37#discussion_r1566469903_
|
closed
|
2024-04-15T22:08:05Z
|
2024-04-29T14:07:51Z
|
https://github.com/Lightning-AI/LitServe/issues/45
|
[
"enhancement"
] |
aniketmaurya
| 0
|
benbusby/whoogle-search
|
flask
| 882
|
[BUG] Can't add search engine on Firefox
|
**Describe the bug**
Related to this: https://github.com/benbusby/whoogle-search/issues/147
On the latest Firefox version on Windows and latest Whoogle version (0.7.4), trying to add whoogle as a search engine shows me

I tried enabling GET requests only as mentioned in another ticket I found but no luck.
A regression maybe?
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [ ] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [x] Version 0.7.4
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: Windows 11
- Browser: Firefox
- Version: 106.0.5 (64-bit)
|
closed
|
2022-11-11T14:09:14Z
|
2022-11-11T14:15:22Z
|
https://github.com/benbusby/whoogle-search/issues/882
|
[
"bug"
] |
savvasdalkitsis
| 2
|
ultralytics/ultralytics
|
python
| 18,881
|
Where run summary and run history is created/can be changed?
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hey,
I successfully added the `mAP70` metric to my training and am wondering about the ordering in the final run history and summary.
In which script are the history and summary created?
My `keys` series is as follows, and I believe it is correct:
```python
@property
def keys(self):
"""Returns a list of keys for accessing specific metrics."""
#return ["metrics/precision(B)", "metrics/recall(B)", "metrics/mAP50(B)", "metrics/mAP50-95(B)"] # default
default_keys = ["metrics/precision(B)", "metrics/recall(B)", "metrics/mAP50(B)", "metrics/mAP70(B)", "metrics/mAP50-95(B)"] # default metrics for training job
return default_keys
```
I expected mAP70 to appear right after mAP50:
<div class="wandb-row">
<div class="wandb-col">
<h3>Run history:</h3>
<table class="wandb">
<tr><td>lr/pg0</td><td>█▂▁▁▁</td></tr>
<tr><td>lr/pg1</td><td>▄█▆▄▁</td></tr>
<tr><td>lr/pg2</td><td>▄█▆▄▁</td></tr>
<tr><td>metrics/mAP50(B)</td><td>▁▄▆▇█</td></tr>
<tr><td>metrics/mAP50-95(B)</td><td>▁▄▆▇█</td></tr>
<tr><td>metrics/mAP70(B)</td><td>▁▄▇▇█</td></tr>
<tr><td>metrics/precision(B)</td><td>▁█▂▃▃</td></tr>
<tr><td>metrics/recall(B)</td><td>▁▃▅▇█</td></tr>
<tr><td>model/GFLOPs</td><td>▁</td></tr>
<tr><td>model/parameters</td><td>▁</td></tr>
<tr><td>model/speed_PyTorch(ms)</td><td>▁</td></tr>
<tr><td>train/box_loss</td><td>█▇▄▂▁</td></tr>
<tr><td>train/cls_loss</td><td>█▅▃▂▁</td></tr>
<tr><td>train/dfl_loss</td><td>█▅▃▂▁</td></tr>
<tr><td>val/box_loss</td><td>█▆▃▂▁</td></tr>
<tr><td>val/cls_loss</td><td>█▄▂▁▁</td></tr>
<tr><td>val/dfl_loss</td><td>█▅▃▁▁</td></tr>
</table>
</div>
<div class="wandb-col">
<h3>Run summary:</h3>
<table class="wandb">
<tr><td>lr/pg0</td><td>1e-05</td></tr>
<tr><td>lr/pg1</td><td>1e-05</td></tr>
<tr><td>lr/pg2</td><td>1e-05</td></tr>
<tr><td>metrics/mAP50(B)</td><td>0.23647</td></tr>
<tr><td>metrics/mAP50-95(B)</td><td>0.13146</td></tr>
<tr><td>metrics/mAP70(B)</td><td>0.16201</td></tr>
<tr><td>metrics/precision(B)</td><td>0.29527</td></tr>
<tr><td>metrics/recall(B)</td><td>0.27963</td></tr>
<tr><td>model/GFLOPs</td><td>29.639</td></tr>
<tr><td>model/parameters</td><td>11423327</td></tr>
<tr><td>model/speed_PyTorch(ms)</td><td>3.011</td></tr>
<tr><td>train/box_loss</td><td>1.89892</td></tr>
<tr><td>train/cls_loss</td><td>4.6896</td></tr>
<tr><td>train/dfl_loss</td><td>2.43468</td></tr>
<tr><td>val/box_loss</td><td>1.65862</td></tr>
<tr><td>val/cls_loss</td><td>4.63246</td></tr>
<tr><td>val/dfl_loss</td><td>2.57379</td></tr>
</table>
</div>
</div>
### Additional
_No response_
|
closed
|
2025-01-25T14:19:50Z
|
2025-01-27T10:59:23Z
|
https://github.com/ultralytics/ultralytics/issues/18881
|
[
"question"
] |
Petros626
| 6
|
MagicStack/asyncpg
|
asyncio
| 697
|
How can I preserve the query's sort order in the response list
|
<!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.21.0
* **PostgreSQL version**: 12.5
* **Python version**: 3.9.1
* **Platform**: Ubuntu
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: yes
<!-- Enter your issue details below this comment. -->
When I use asyncpg with an "ORDER BY" clause inside the query, I get the wrong sort order.
```sql
SELECT name
FROM repository_list
WHERE service_instance = $1 AND registry_id = $5
ORDER BY $2
LIMIT $3
OFFSET $4
```
```python
await request.state.db.fetch(
f"""
SELECT name
FROM repository_list
WHERE service_instance = $1 AND registry_id = $2
ORDER BY $5
LIMIT $3
OFFSET $4
""",
core.service_instance_pk,
_list_arguments.get("registry_id"),
list_arguments.limit,
list_arguments.offset,
sort_field
)
```
In console response:
```
123
123_test
alpine
long-name-repository-harbor-check-interface
test-test-test
```
Response in code:
```
123_test
test-test-test
long-name-repository-harbor-check-interface
alpine
123
```
But when I use this syntax, everything works fine, although it is not safe.
```sql
SELECT name
FROM repository_list
WHERE service_instance = $1 AND registry_id = $2
ORDER BY {sort_field}
LIMIT $3
OFFSET $4
```
```python
await request.state.db.fetch(
f"""
SELECT *
FROM repository_list
WHERE service_instance = $1 AND registry_id = $2
ORDER BY {sort_field} {order}
LIMIT $3
OFFSET $4
""",
core.service_instance_pk,
_list_arguments.get("registry_id"),
list_arguments.limit,
list_arguments.offset
)
```
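For reference, `ORDER BY $5` ends up ordering by a constant value rather than by the column whose name is in the parameter, because PostgreSQL cannot bind identifiers as query parameters; that explains the "wrong" order. A minimal sketch of keeping the interpolated version safe is to validate `sort_field` against a whitelist first (the column names below are placeholders):
```python
ALLOWED_SORT_FIELDS = {"name", "registry_id"}
ALLOWED_ORDER = {"ASC", "DESC"}

def build_query(sort_field: str, order: str = "ASC") -> str:
    if sort_field not in ALLOWED_SORT_FIELDS or order.upper() not in ALLOWED_ORDER:
        raise ValueError("unsupported sort field or order")
    return f"""
        SELECT name
        FROM repository_list
        WHERE service_instance = $1 AND registry_id = $2
        ORDER BY {sort_field} {order.upper()}
        LIMIT $3
        OFFSET $4
    """
```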
|
closed
|
2021-01-27T13:00:28Z
|
2021-02-10T05:55:47Z
|
https://github.com/MagicStack/asyncpg/issues/697
|
[] |
p141592
| 1
|
mwaskom/seaborn
|
matplotlib
| 3,568
|
Feature request: allow seaborn to annotate a subplot with 0 counts, when there are 0 counts for all bars in a subplot
|
Hi, first up, wanna say to the developer, thanks for developing seaborn! It has been a great tool for me to visualise data =). Would like to ask if its possible to add the ability for seaborn, to annotate a subplot with 0 counts, when there are 0 counts for all bars in a subplot? A more detailed explanation is below.
I have code using seaborn `catplot`, to draw categorical plots onto a FacetGrid. I am using a `countplot` in the `catplot` function, hence am using `kind='count'`. The `col` argument in the `catplot` is set to the `col_cat` variable, which in this context is defined as `age_category`. `age_category` is a column in my `df`, which as its name suggests, represents age categories. This is an ordered pandas categorical dtype.
My `df` is as follows:
```
ipdb> df
spirometryResult_category age_category habits-smoking
_id
63bb97708e5f58ef85f6e4ea Normal 20-39 years old Yes
63bd1b228e5f58ef85f73130 Normal 20-39 years old Yes
6423cb1c174e67af0aa0f0fc Normal 20-39 years old No
6423d85e174e67af0aa10cda Restrictive 20-39 years old No
6423d8bb174e67af0aa10d98 Obstructive 20-39 years old No
... ... ... ...
6549a0df0941d048fdfd94c4 Obstructive 20-39 years old No
6549d0ab0941d048fdfd960d Normal 40-59 years old No
6549d0ee0941d048fdfd962b Normal 20-39 years old No
654b17a20941d048fdfda256 Normal 20-39 years old No
654d81700941d048fdfdc27d Normal 40-59 years old No
[106 rows x 3 columns]
```
The `age_category` column in `df` is as follows:
```
ipdb> df['age_category']
_id
63bb97708e5f58ef85f6e4ea 20-39 years old
63bd1b228e5f58ef85f73130 20-39 years old
6423cb1c174e67af0aa0f0fc 20-39 years old
6423d85e174e67af0aa10cda 20-39 years old
6423d8bb174e67af0aa10d98 20-39 years old
...
6549a0df0941d048fdfd94c4 20-39 years old
6549d0ab0941d048fdfd960d 40-59 years old
6549d0ee0941d048fdfd962b 20-39 years old
654b17a20941d048fdfda256 20-39 years old
654d81700941d048fdfdc27d 40-59 years old
Name: age_category, Length: 106, dtype: category
Categories (4, object): ['20-39 years old' < '40-59 years old' < '60-79 years old' < '>= 80 years old']
```
The distribution of categories in the `age_category` column is as follows:
```
ipdb> df['age_category'].value_counts()
age_category
20-39 years old 89
40-59 years old 14
60-79 years old 3
>= 80 years old 0
Name: count, dtype: int64
```
The number of subjects in the age category of '>= 80 years old' is 0, which gives me problems in plotting its annotations for the bars.
In general, the code which is below works. My objective is to plot multiple subplots, one for each age category, showing the subject counts for each combination of `spirometryResult_category` and `habits-smoking`.
```
# Getting colours as specified in the config, for each hue category
# Need to remove this hardcoding when i improve script
colour_map = config['seaborn_colourmaps'][hue_cat]
# Plotting graph
# count refers to param_category counts
plt.subplots(figsize=figsize)
# Not sure why setting axes.labelsize here doesnt
# work
sns.set_context('paper', rc={'font.size':fontsize})
# height=4, aspect=.6,
g = sns.catplot(
data=df, x=param_category, hue=hue_cat, col=col_cat,
kind='count', palette=colour_map, col_wrap=wrap_num,
saturation=1
)
for ax in g.axes:
ax.tick_params(left=False, labelbottom=True)
ax.set_xticklabels(ax.get_xticklabels(), size=fontsize)
# Replacing subplot title if needed
if col_cat in config['seaborn_alt_names']:
new_title = config['seaborn_alt_names'][col_cat]
ax.set_title( ax.get_title().replace(col_cat, new_title), size=fontsize)
# Auto-label bars
for container in ax.containers:
container.datavalues = np.nan_to_num(container.datavalues)
ax.bar_label(container, fmt='%.0f', padding=2)
# In contrast to prev plotting code, despine goes here, as facetgrid
# requires it to be done this way
g.despine(top=True, right=True, left=True)
# Fine adjustment of aesthetics
g.set(yticklabels=[], ylabel=None, xlabel=None)
g.tick_params('x', rotation=90)
# Checking if legend title is needed
legend = False
if 'legend' in plot_info:
legend = plot_info['legend']
if not legend:
g.get_legend().set_title(None)
else:
# If an alternative legend title is specified,
# use that, if not, use the default one
if hue_cat in config['seaborn_alt_names']:
new_title = config['seaborn_alt_names'][hue_cat]
g.legend.set_title(new_title)
# Continuing adjustment of aesthetics
plt.subplots_adjust(hspace=1, wspace=0.3)
g.figure.savefig(filename, bbox_inches='tight')
plt.close()
```
The output picture is show here:

As you can see, the category of ">= 80 years old" has no subjects, hence for its corresponding subplots, the text "0" is not plotted at all. All other age categories have their corresponding bars and annotations created correctly. For this case, where ">= 80 years old" has no subjects, `ax.containers` is an empty list, therefore my for loop using `for container in ax.containers:` to annotate cases with 0 counts does not work.
How do I force seaborn to annotate subplots with 0 counts, in the correct location (automatically decided by seaborn so i dont have to hardcode anything), in this case, where the category has 0 subjects, and `ax.containers` is an empty list? Seaborn doesn't seem to allow me to do that, so would it be possible to add this in please?
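To make the request concrete, here is a rough sketch of the manual fallback I'd currently have to write (reusing `g`, `np` and `fontsize` from the snippet above, and assuming the x tick positions can stand in for the missing bar locations):
```python
for ax in g.axes:
    if ax.containers:
        for container in ax.containers:
            container.datavalues = np.nan_to_num(container.datavalues)
            ax.bar_label(container, fmt='%.0f', padding=2)
    else:
        # no bars were drawn for this facet, so place a "0" above each x position
        for x_pos in ax.get_xticks():
            ax.annotate('0', (x_pos, 0), ha='center', va='bottom', fontsize=fontsize)
```
Having seaborn place these zero labels itself would avoid this kind of hardcoding.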
|
closed
|
2023-11-22T08:33:20Z
|
2023-11-24T01:59:42Z
|
https://github.com/mwaskom/seaborn/issues/3568
|
[] |
jonng1000
| 9
|
polyaxon/traceml
|
matplotlib
| 8
|
Show missing columns only
|
Thanks for the awesome plugin!
1. Would it be possible to add colors to point out missing values? A light shade of red if the missing count is > 0.
2. Would it be possible to display only the columns with missing values? Sometimes a dataframe has a lot of columns and the user is mostly interested in the missing information.
I am new to python, if you guide me where to look, I can create a pull request. Thank you.
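For point 2, a minimal pandas sketch of the filtering I have in mind, so it is clear where such an option would plug in:
```python
import pandas as pd

df = pd.DataFrame({
    "a": [1, 2, 3],
    "b": [1, None, 3],
    "c": [None, None, 3],
})

# keep only the columns that actually contain missing values
missing_only = df.loc[:, df.isnull().any()]
print(missing_only.isnull().sum())  # b: 1, c: 2
```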
|
open
|
2017-04-24T18:50:35Z
|
2017-04-24T18:50:35Z
|
https://github.com/polyaxon/traceml/issues/8
|
[] |
upkarlidder
| 0
|
dmlc/gluon-nlp
|
numpy
| 1,221
|
albert model requested!
|
## Description
The ALBERT model size is pretty small and interesting.
An MXNet-based pretraining implementation should be helpful to a lot of people!
## References
|
closed
|
2020-05-05T14:07:34Z
|
2020-07-19T21:49:27Z
|
https://github.com/dmlc/gluon-nlp/issues/1221
|
[
"enhancement"
] |
timespaceuniverse
| 2
|
gevent/gevent
|
asyncio
| 1,097
|
Sleep in a threading.Timer freezes execution
|
* gevent version: 1.2.2
* Python version: 3.5.3 anaconda
* Operating System: Windows 10 Home, version 1709
### Description:
I've been trying to call time.sleep from within threading.Timer from within a gevent Pool process, but execution halts without ever going through time.sleep.
A side note, but if you set threading.Timer to execute after more than 0.0 seconds, it halts without ever running the function behind the timer.
This issue does not exist when using threading.Thread in stead of threading.Timer
Expected behavior: process sleeps for given time and continues execution.
### What I've run:
```python
from gevent import monkey
from gevent.pool import Pool
import threading
import time
monkey.patch_all()
def wrapped():
def example():
print("sleeping")
time.sleep(1)
print("end sleeping")
t = threading.Timer(0.0, example)
print("threading timer start")
t.start()
t.join()
pool = Pool()
pool.spawn(wrapped)
pool.join()
```
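For comparison, a minimal sketch of the ordering the gevent docs recommend, with `monkey.patch_all()` executed before any other import; I have not verified that this alone resolves the Timer hang, but it rules out partially patched modules as a cause:
```python
from gevent import monkey
monkey.patch_all()  # patch before anything else imports threading/time

from gevent.pool import Pool
import threading
import time

def wrapped():
    def example():
        print("sleeping")
        time.sleep(1)
        print("end sleeping")

    t = threading.Timer(0.0, example)
    print("threading timer start")
    t.start()
    t.join()

pool = Pool()
pool.spawn(wrapped)
pool.join()
```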
|
closed
|
2018-02-13T15:29:38Z
|
2018-02-17T15:11:25Z
|
https://github.com/gevent/gevent/issues/1097
|
[] |
kamac
| 1
|
littlecodersh/ItChat
|
api
| 542
|
How the decorator puts user-defined handlers into a dict
|
https://github.com/littlecodersh/ItChat/blob/bbcb8173b611137a6fd6ac8a4d0a96cb8892fbd6/itchat/components/register.py#L72
I've just started learning Python and don't know when this code is executed.
Is it that a Python decorator, once written, is called automatically and then puts these user-defined functions into the corresponding dict?
I'm not very clear on this; please advise.
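To make the question concrete, here is a minimal generic sketch of the pattern I think register.py uses (the names below are made up, not ItChat's actual ones): the decorator body runs at the moment the `@...` line is executed, and that is when the user function is stored in the dict.
```python
_handlers = {}  # message type -> user-defined function

def msg_register(msg_type):
    def decorator(func):
        _handlers[msg_type] = func  # runs when @msg_register(...) is applied
        return func
    return decorator

@msg_register("TEXT")
def on_text(msg):
    return "got: " + msg

# later, when a message arrives, the framework looks the handler up:
print(_handlers["TEXT"]("hello"))  # got: hello
```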
|
closed
|
2017-11-01T08:58:20Z
|
2017-11-01T09:34:10Z
|
https://github.com/littlecodersh/ItChat/issues/542
|
[] |
fqdeng
| 1
|
ranaroussi/yfinance
|
pandas
| 1,598
|
Ticker.info data bug
|
After the latest update to fix the 401 error, the function Ticker.info doesn't return the same data as before. In my app I use this to get the longName of the ticker, but now this value doesn't exist anymore.
Printout of the returned data for the ticker ^BVSP:

|
closed
|
2023-07-14T11:45:14Z
|
2023-07-14T16:52:17Z
|
https://github.com/ranaroussi/yfinance/issues/1598
|
[] |
WellyngtonF
| 4
|
jupyterlab/jupyter-ai
|
jupyter
| 1,184
|
After a cell's code execution fails, click the error analysis button to perform the analysis.
|
Hello, I was previously configuring a custom model for jupyter-ai and came across an issue that mentioned a feature: when code in a notebook cell raises an error, a button pops up, and clicking it runs error analysis. However, I didn't save the issue link and now I can't find it. Are there any related discussions in the jupyter-ai issues? If so, could you please let me know how to configure this feature locally? Thank you!

|
open
|
2025-01-06T06:54:53Z
|
2025-01-08T19:02:17Z
|
https://github.com/jupyterlab/jupyter-ai/issues/1184
|
[
"enhancement"
] |
msyJY
| 9
|
PaddlePaddle/PaddleHub
|
nlp
| 2,333
|
Newbie question: how do I specify which GPU ernie_skep_sentiment_analysis uses?
|
I have two GPUs, and I'm studying the ernie_skep_sentiment_analysis prediction example:
import paddlehub as hub
module = hub.Module(name="ernie_skep_sentiment_analysis")
test_texts = ['你不是不聪明,而是不认真', '虽然小明很努力,但是他还是没有考100分']
results = module.predict_sentiment(test_texts, use_gpu=False)
for result in results:
    print(result['text'])
    print(result['sentiment_label'])
    print(result['positive_probs'])
    print(result['negative_probs'])
I want to pick a specific GPU for this example, but setting os.environ["CUDA_VISIBLE_DEVICES"]="1" doesn't seem to select which GPU it runs on.
I also tried modifying the following in model.py, but that didn't select the GPU either:
_places = os.environ["CUDA_VISIBLE_DEVICES"]
int(_places[0])
What do I need to do to specify the GPU in the prediction code example?
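One thing I plan to try (an assumption based on how CUDA_VISIBLE_DEVICES usually behaves, not something confirmed for PaddleHub) is to set the variable before paddlehub is imported, so the process only ever sees the second GPU:
```python
import os
# Must be set before paddle/paddlehub are imported, otherwise the
# framework may have already enumerated the GPUs (assumption).
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import paddlehub as hub

module = hub.Module(name="ernie_skep_sentiment_analysis")
test_texts = ['你不是不聪明,而是不认真', '虽然小明很努力,但是他还是没有考100分']
results = module.predict_sentiment(test_texts, use_gpu=True)
```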
|
open
|
2024-07-04T07:36:34Z
|
2024-07-04T07:37:05Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/2333
|
[] |
thehzzz
| 0
|
gradio-app/gradio
|
data-visualization
| 10,712
|
No module named `gradio._simple_templates`
|
### Describe the bug
When launching my app, I get the following error. However, this happens only in a specific environment as described in the system info below.
```shell
Traceback (most recent call last):
File "/root/app/package/webapp.py", line 11, in <module>
import gradio as gr
File "/usr/local/lib/python3.12/site-packages/gradio/__init__.py", line 3, in <module>
import gradio._simple_templates
ModuleNotFoundError: No module named 'gradio._simple_template'
```
What am I missing?
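A small diagnostic I ran to narrow this down (just a sketch; the package layout is my assumption) checks whether the `_simple_templates` subpackage is physically present in the installed distribution:
```python
# Sketch: locate the installed gradio package and check whether the
# _simple_templates subpackage actually exists on disk.
import importlib.util
import pathlib

spec = importlib.util.find_spec("gradio")
print("gradio resolved from:", spec.origin)

pkg_dir = pathlib.Path(spec.origin).parent
print("_simple_templates exists:", (pkg_dir / "_simple_templates").is_dir())
```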
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
```
is enough.
### Screenshot
_No response_
### Logs
```shell
Traceback (most recent call last):
File "/root/app/package/webapp.py", line 11, in <module>
import gradio as gr
File "/usr/local/lib/python3.12/site-packages/gradio/__init__.py", line 3, in <module>
import gradio._simple_templates
ModuleNotFoundError: No module named 'gradio._simple_template'
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.20.0
gradio_client version: 1.7.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.11
ffmpy: 0.5.0
gradio-client==1.7.2 is not installed.
groovy: 0.1.2
httpx: 0.28.1
huggingface-hub: 0.29.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.0.2
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.2
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.9
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.46.0
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2025.2.0
httpx: 0.28.1
huggingface-hub: 0.29.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 15.0
```
### Severity
Blocking usage of gradio
|
closed
|
2025-03-03T08:24:10Z
|
2025-03-06T02:56:47Z
|
https://github.com/gradio-app/gradio/issues/10712
|
[
"bug",
"pending clarification"
] |
anirbanbasu
| 3
|
aiortc/aiortc
|
asyncio
| 1,178
|
Creating data channel makes the connection fail
|
I have a webrtc client made using aiortc in python and it is connecting to a webrtc server using rust ([webrtc-rs](https://github.com/webrtc-rs/webrtc))
currently I am creating a data channel and adding a transceiver to receive video like so
```
pc.createDataChannel("dc")
pc.addTransceiver("video", "recvonly")
offer = await pc.createOffer()
await pc.setLocalDescription(offer)
```
on the server side I get this message
```
2024-10-23T15:44:56.339948Z WARN webrtc::peer_connection::peer_connection_internal: undeclared_media_processor failed to open SrtpSession
2024-10-23T15:44:56.340313Z WARN webrtc::peer_connection::peer_connection_internal: Failed to start SCTP: DTLS not established
2024-10-23T15:44:56.340345Z WARN webrtc::peer_connection::peer_connection_internal: undeclared_media_processor failed to open SrtcpSession
```
and the connection fails
However, if I comment out the line `pc.createDataChannel("dc")`, it connects fine and receives the video correctly.
I'm not sure what could be causing this.
I checked whether the issue was on the Rust side, but everything works fine if I connect with JavaScript instead of aiortc.
|
closed
|
2024-10-23T15:47:12Z
|
2025-02-01T16:06:04Z
|
https://github.com/aiortc/aiortc/issues/1178
|
[] |
nunoribeiro-tw
| 1
|
giotto-ai/giotto-tda
|
scikit-learn
| 9
|
bind to binder
|
The examples should be runnable from Binder.
|
closed
|
2019-10-15T14:00:50Z
|
2019-10-18T07:27:06Z
|
https://github.com/giotto-ai/giotto-tda/issues/9
|
[] |
matteocao
| 0
|
WZMIAOMIAO/deep-learning-for-image-processing
|
pytorch
| 834
|
value cannot be converted to type int without overflow
|
Traceback (most recent call last):
File "train.py", line 216, in <module>
main(args)
File "train.py", line 154, in main
confmat = evaluate(model, val_loader, device=device, num_classes=num_classes)
File "/home/ma-user/work/SEG/deep-learning-for-image-processing/pytorch_segmentation/deeplab_v3/train_utils/train_and_eval.py", line 29, in evaluate
confmat.update(target.flatten(), output.argmax(1).flatten())
File "/home/ma-user/work/SEG/deep-learning-for-image-processing/pytorch_segmentation/deeplab_v3/train_utils/distributed_utils.py", line 88, in update
self.mat += torch.bincount(inds, minlength=n**2).reshape(n, n)
RuntimeError: value cannot be converted to type int without overflow
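A quick check that might help localize this (my assumption is that labels outside [0, num_classes), such as 255 ignore pixels, push the bincount index out of range), sketched below:
```python
# Sketch: verify the ground-truth labels fall inside [0, num_classes);
# out-of-range values are a common cause of confusion-matrix index
# problems (assumption, not a confirmed diagnosis of this traceback).
import torch

def check_targets(target: torch.Tensor, num_classes: int) -> None:
    vals = torch.unique(target)
    print("unique target values:", vals.tolist())
    bad = vals[(vals < 0) | (vals >= num_classes)]
    if bad.numel() > 0:
        print("out-of-range labels found:", bad.tolist())
```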
|
open
|
2024-09-20T13:37:58Z
|
2024-09-20T13:37:58Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/834
|
[] |
gosling123456
| 0
|
jazzband/django-oauth-toolkit
|
django
| 1,143
|
Help wanted: reviewers, additional project lead(s)
|
# Help Wanted
We need help maintaining and enhancing django-oauth-toolkit (DOT).
## Join the team
Please consider joining [Jazzband](https://jazzband.co) (If not already a member) and the [DOT project team](https://jazzband.co/projects/django-oauth-toolkit).
## How you can help
See our [contributing](https://django-oauth-toolkit.readthedocs.io/en/latest/contributing.html) info and the open [issues](https://github.com/jazzband/django-oauth-toolkit/issues) and [PRs](https://github.com/jazzband/django-oauth-toolkit/pulls), especially those labeled [help-wanted](https://github.com/jazzband/django-oauth-toolkit/labels/help-wanted).
## Submit PRs and perform Reviews
PR submissions and reviews are always appreciated! Since we require an independent review of any PR before it can be merged, having your second set of eyes looking at PRs is extremely valuable.
## Please don't merge PRs
Please be aware that we don't want _every_ Jazzband member to merge PRs but just a handful of project team members so that we can maintain a modicum of control over what goes into a release. Only [project leads](https://jazzband.co/projects/django-oauth-toolkit) are able to publish releases to Pypi and it becomes difficult when creating a new release for the leads to deal with "unexpected" merged PRs.
## Become a Project Lead
If you are interested in stepping up to be a Project Lead, please join the [discussion](https://github.com/orgs/jazzband/teams/django-oauth-toolkit).
|
open
|
2022-04-24T17:27:51Z
|
2022-04-24T17:27:51Z
|
https://github.com/jazzband/django-oauth-toolkit/issues/1143
|
[
"help-wanted"
] |
n2ygk
| 0
|
wkentaro/labelme
|
computer-vision
| 746
|
[Issue] Need polygons to be darker to completely mask the object
|
Can the polygons in Labelme be made darker, so that they completely mask the object? A little help would be appreciated. Thanks
|
closed
|
2020-08-09T07:18:44Z
|
2022-06-25T04:58:46Z
|
https://github.com/wkentaro/labelme/issues/746
|
[] |
nkshelby
| 0
|
mithi/hexapod-robot-simulator
|
plotly
| 95
|
Hexapod should twist when all femur joints are on the ground
|
<img width="1280" alt="Screen Shot 2020-05-29 at 2 56 48 AM" src="https://user-images.githubusercontent.com/1670421/83181979-382eb300-a158-11ea-9848-07afb640ed91.png">
|
closed
|
2020-05-28T18:58:24Z
|
2020-06-22T21:00:20Z
|
https://github.com/mithi/hexapod-robot-simulator/issues/95
|
[
"bug",
"wontfix"
] |
mithi
| 1
|
keras-team/keras
|
machine-learning
| 20,964
|
User guide to use Rematerialization
|
The gradient checkpointing (rematerialization) feature was introduced in https://github.com/keras-team/keras/pull/20743, but currently there is no user guide on how to adopt or use it.
|
open
|
2025-02-26T01:06:43Z
|
2025-02-27T17:11:10Z
|
https://github.com/keras-team/keras/issues/20964
|
[
"type:docs"
] |
pure-rgb
| 2
|
strawberry-graphql/strawberry
|
django
| 2,986
|
Option for IsAuthenticated permission to `fail_silently` and still include errors in the response
|
## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior
## Description
When using `IsAuthenticated` from `strawberry_django.permissions`, with the option `fail_silently=True` it would be nice for one to be able to provide an option like `show_errors=True` so that on top of returning None for a nullable field or an empty list for a list field, errors can also be displayed on the Response.
This would be useful since when one is not authenticated or has no permission, the Response can include errors when using `fail_silently=True`
Or how can one override error handling to prevent traceback logging on the terminal for an anticipated error raised in code like failed permissions, while still displaying the error in the response?
<!-- POLAR PLEDGE BADGE START -->
## Upvote & Fund
- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.
- We receive the funding once the issue is completed & confirmed by you.
- Thank you in advance for helping prioritize & fund our backlog.
<a href="https://polar.sh/strawberry-graphql/strawberry-graphql-django/issues/322">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/strawberry-graphql/strawberry-graphql-django/issues/322/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/strawberry-graphql/strawberry-graphql-django/issues/322/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
|
open
|
2023-07-27T12:15:46Z
|
2024-01-17T14:00:30Z
|
https://github.com/strawberry-graphql/strawberry/issues/2986
|
[] |
K-Kelvin
| 0
|
NVIDIA/pix2pixHD
|
computer-vision
| 331
|
.
|
open
|
2024-01-23T13:19:37Z
|
2024-01-28T04:20:09Z
|
https://github.com/NVIDIA/pix2pixHD/issues/331
|
[] |
merajose
| 0
|
|
httpie/cli
|
rest-api
| 1,454
|
Feature request: pre-request script and post-request script
|
## Checklist
- [x] I've searched for similar feature requests.
---
## Enhancement request
Please add pre and post request script.
---
## Problem it solves
Sometimes, when working with a secure API that requires a request signature built from several pieces (e.g. hashing the host, request URL, request body, etc.), it's tiresome to create it manually and then add it to a request header.
Having access to a pre-request script to generate the signature and place it in a given header would be useful.
Post-request script: also in the case of a secure API, we may need to obtain a token that must then be included in a certain header on every subsequent request. Or maybe we've hit a login endpoint and now want to store the JWT token from the response in a variable.
---
## Additional information, screenshots, or code examples
Similar to the pre-request script and test script (post-request script) in Postman.
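To illustrate the workaround I use today (a sketch; the secret, URL and header name are placeholders, not a proposal for the final HTTPie syntax), I compute the signature in a small Python wrapper and pass it to `http` as a header:
```python
# Sketch of the current workaround: compute an HMAC signature outside
# HTTPie and pass it in as a header. SECRET/URL/BODY are placeholders.
import hashlib
import hmac
import subprocess

SECRET = b"my-shared-secret"             # placeholder
URL = "https://api.example.com/orders"   # placeholder
BODY = '{"item": 1}'

signature = hmac.new(SECRET, BODY.encode(), hashlib.sha256).hexdigest()

subprocess.run([
    "http", "POST", URL,
    f"X-Signature:{signature}",   # signature handed over as a header
    "--raw", BODY,
])
```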
|
open
|
2022-12-08T13:22:28Z
|
2025-03-19T10:20:08Z
|
https://github.com/httpie/cli/issues/1454
|
[
"enhancement",
"new"
] |
kamalhm
| 12
|
dgtlmoon/changedetection.io
|
web-scraping
| 2,930
|
[feature] (UI) Rearrange order of watch group tabs.
|
**Version and OS**
0.49.0 on termux
** Description:**
It would be useful to have the ability to rearrange the order of watch groups. Currently they go: "All" "AGroup" "BGroup" "CGroup" (in alphabetical order), but what if "CGroup" is the most important one for the user? They might prefer "All" "CGroup" "AGroup" "BGroup", but currently that can't be arranged.
|
open
|
2025-01-26T10:33:11Z
|
2025-03-22T21:10:35Z
|
https://github.com/dgtlmoon/changedetection.io/issues/2930
|
[
"enhancement",
"user-interface"
] |
gety9
| 1
|
keras-team/keras
|
deep-learning
| 20,430
|
keras.layers.Concatenate fails with 5 dimensional tensors
|
When trying to use a Concatenate layer with two 5-dimensional tensors where all but the last dimension are 1, there is an "IndexError". In the provided example the result should be a tensor of shape (1, 1, 1, 1, 10). It works fine with <=4-dimensional tensors, or when any of the first four dimensions is not one.
Here is the code to reproduce the issue (python 3.11) :
```
import numpy as np
import tensorflow as tf
print("Tensorflow version = ", tf.__version__)
print("Keras version = ", tf.keras.__version__)
sys_details = tf.sysconfig.get_build_info()
cuda_version = sys_details["cuda_version"]
print("CUDA version = ", cuda_version)
cudnn_version = sys_details["cudnn_version"]
print("cuDNN version = ", cudnn_version)
tf.keras.layers.Concatenate()([
    np.arange(5).reshape(1, 1, 1, 1, 5),
    np.arange(5).reshape(1, 1, 1, 1, 5)
])
```
And here is the result
```
Tensorflow version = 2.18.0
Keras version = 3.6.0
CUDA version = 12.5.1
cuDNN version = 9
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
File ~/scripts/concatenate.py:12
9 cudnn_version = sys_details["cudnn_version"]
10 print("cuDNN version = ", cudnn_version)
---> 12 tf.keras.layers.Concatenate()([
13 np.arange(5).reshape(1, 1, 1, 1, 5),
14 np.arange(5).reshape(1, 1, 1, 1, 5)
15 ])
File ~/.conda/pa311tf/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # `keras.config.disable_traceback_filtering()`
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
File ~/.conda/pa311tf/lib/python3.11/site-packages/keras/src/layers/merging/concatenate.py:70, in Concatenate.build(self, input_shape)
60 for axis, axis_value in enumerate(
61 reduced_inputs_shapes[i][1:], start=1
62 ):
(...)
67 # but if tensor shapes are not the same when
68 # calling, an exception will be raised.
69 if axis != concat_axis and axis_value == 1:
---> 70 del reduced_inputs_shapes[i][axis]
72 if len(reduced_inputs_shapes[i]) > self.axis:
73 del reduced_inputs_shapes[i][self.axis]
IndexError: list assignment index out of range
```
|
closed
|
2024-10-31T09:03:27Z
|
2024-11-01T02:37:14Z
|
https://github.com/keras-team/keras/issues/20430
|
[] |
pboutinaud
| 2
|
Farama-Foundation/Gymnasium
|
api
| 452
|
[Bug Report] Missing figure in environment creation
|
### Describe the bug
In the article https://gymnasium.farama.org/tutorials/gymnasium_basics/environment_creation/ we have this part:
_An episode in this environment (with size=5) might look like this:
where the blue dot is the agent and the red square represents the target.
Let us look at the source code of GridWorldEnv piece by piece:_
So what exactly does the env look like? I suppose there should be a picture.
### Code example
_No response_
### System info
Chrome browse last version
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
|
closed
|
2023-04-15T15:29:11Z
|
2023-11-29T13:57:03Z
|
https://github.com/Farama-Foundation/Gymnasium/issues/452
|
[
"bug"
] |
Alian3785
| 1
|
healthchecks/healthchecks
|
django
| 448
|
Support multiple schedules
|
The cron syntax is very nice, but what if I need to check for something running at 12:00 and 17:30? AFAIK this can't be done with the regular cron syntax, and in a crontab I would have two lines. A solution would be to have multiple schedules enabled for one check.
Great software by the way, very easy and useful.
|
closed
|
2020-11-12T12:41:50Z
|
2024-01-22T09:39:59Z
|
https://github.com/healthchecks/healthchecks/issues/448
|
[
"feature"
] |
adepertat
| 3
|
ClimbsRocks/auto_ml
|
scikit-learn
| 314
|
ValueError: Iteration of zero-sized operands is not enabled
|
Hi
I am getting this error
>
> About to fit the pipeline for the model GradientBoostingRegressor to predict s3diff
> Started at:
> 2017-08-16 08:46:46
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "C:\Anaconda3\lib\site-packages\auto_ml\predictor.py", line 566, in train
> self.trained_final_model = self.train_ml_estimator(estimator_names, self._scorer, X_df, y)
> File "C:\Anaconda3\lib\site-packages\auto_ml\predictor.py", line 993, in train_ml_estimator
> trained_final_model = self.fit_single_pipeline(X_df, y, estimator_names[0], feature_learning=feature_learning, prediction_interval=False)
> File "C:\Anaconda3\lib\site-packages\auto_ml\predictor.py", line 762, in fit_single_pipeline
> ppl.fit(X_df, y)
> File "C:\Anaconda3\lib\site-packages\auto_ml\utils_model_training.py", line 129, in fit
> best_model = deepcopy(self.model)
> File "C:\Anaconda3\lib\copy.py", line 182, in deepcopy
> y = _reconstruct(x, rv, 1, memo)
> File "C:\Anaconda3\lib\copy.py", line 297, in _reconstruct
> state = deepcopy(state, memo)
> File "C:\Anaconda3\lib\copy.py", line 155, in deepcopy
> y = copier(x, memo)
> File "C:\Anaconda3\lib\copy.py", line 243, in _deepcopy_dict
> y[deepcopy(key, memo)] = deepcopy(value, memo)
> File "C:\Anaconda3\lib\copy.py", line 166, in deepcopy
> y = copier(memo)
I am trying to use:
column_descriptions = {'s3diff': 'output'}
ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)
ml_predictor.train(X_train)
Using Windows 10 64-bit, PTVS, Python 3.5 (Anaconda distribution), numpy version 1.12.0
Here is a small sample of the data in CSV format:
s1,s2,s3,z1,z2,z3,s3diff
0.013559228178703498,0.025673357785545674,0.054246621518769716,0.0019039451760423324,0.0023811203598460453,0.008979818636014408,-0.00044262142281056843
0.013565574350913752,0.02567456687888959,0.05427293659974362,0.0018950401329676898,0.002375412193837593,0.008973971795195984,-0.0005004106844857525
0.013570379967844295,0.02567511670143283,0.05429903502472458,0.001885574961102246,0.002369474695989751,0.008968100260109748,-0.0005579600955363807
0.013508054303235838,0.025634770837291182,0.05430893657538927,0.001878011458290236,0.002364296215424333,0.008962307212474966,-0.0005992894613873756
0.013511602609961712,0.025634481169883244,0.05433465609179345,0.001868753716462282,0.002358432348394757,0.00895644722429477,-0.0006588159796903567
0.013513700507681331,0.02563355565235777,0.05436016223855941,0.001859419647459849,0.002352536562359934,0.008950592602950922,-0.0007156974742899835
0.013514391305573359,0.025632003766165676,0.054385456175574456,0.0018503606695955683,0.0023467437851787797,0.00894474510140853,-0.0007723436742973419
0.013513716868915206,0.025629834837153337,0.05441053905442364,0.0018413902979262547,0.0023409881948736736,0.008938919281015862,-0.0008287557548883745
0.013450119553814184,0.025587882382037484,0.054419595962235286,0.001854352953255216,0.002343926496141209,0.00893375061056522,-0.0008691188267534594
0.013386777814905954,0.025545849267344527,0.05442852055333747,0.0018508048645047013,0.002340393603631607,0.008928115639926697,-0.0009093265677538179
0.013323691348976123,0.02550373737629834,0.05443731348446538,0.0018416924640004707,0.0023346113381929354,0.00892230270352201,-0.0009493796581290725
0.013260859838861815,0.0254615485701288,0.05444597540841335,0.001835304304074932,0.002329948111117238,0.008916590220728613,-0.0009892787741490603
0.013198282953775365,0.025419284688339143,0.054454506974065724,0.0018288390671686878,0.002325261752092658,0.00891088097130443,-0.0010290245881449042
0.01313596034962186,0.025376947548969384,0.05446290882642744,0.0018223161630460574,0.002320559500834023,0.00890517549641675,-0.0010686177685395107
0.013073891669310734,0.025334538948855814,0.054471181606654345,0.0018132261856873175,0.002314754709712873,0.008899370877674928,-0.0011080589798779306
0.01301207654306164,0.025292060663886902,0.05447932595208324,0.001804296587586618,0.0023090425250402204,0.008893586829951936,-0.0011473488828574324
0.012950514588704706,0.02524951444925555,0.054487342496261394,0.0018060599788262445,0.0023073132657298033,0.008888017596813326,-0.001186488134357025
0.012889205411974951,0.025206902039707345,0.0544952318689762,0.0017980637482124453,0.002301992356231753,0.008882278618614799,-0.0012278338231271546
0.012828148606801568,0.025164225149785897,0.0545029946962841,0.0018081866368705492,0.002303532575535176,0.008876923566419309,-0.0012690234464503086
0.012767343755591552,0.02512148547407413,0.05451063160053942,0.0017991229857606476,0.0022977632192500924,0.008871148028440896,-0.0013077024835957893
0.012769012369507336,0.025116900259796127,0.05453364766297649,0.001813058039734092,0.002300903393992256,0.008865907504424043,-0.001361737950984966
0.012769360795593728,0.02511175543442325,0.05455646534993494,0.0018040366016910095,0.0022951689854805012,0.008860147479516639,-0.0014179000626484766
0.012708223794945827,0.025068308668128217,0.054563646810826434,0.001854930281172579,0.0023133418414137785,0.008856113699041393,-0.001458396584914777
0.012647349666553138,0.02502481013338672,0.054570705112575536,0.0018456335914751106,0.002307550562025229,0.008850359821925577,-0.0014963941258895749
0.012586737761005141,0.024981261384899724,0.05457764085771905,0.001836421055512864,0.002301779200491789,0.00884460816485268,-0.0015342465036939856
0.012526387421731532,0.02493766395929931,0.05458445464527198,0.0018272159564852665,0.002296014554570868,0.00883886432470322,-0.0015742934058071074
0.012466297985204895,0.02489401937535991,0.054591147070754416,0.0018187816574252059,0.002290587473257975,0.00883316094611504,-0.0016141897380302706
0.012406468781139459,0.02485032913420673,0.05459771872621798,0.0018105437265131766,0.00228524120194514,0.008827469027375913,-0.0016539361338125185
0.012346899132686176,0.02480659471952132,0.054604170200272084,0.001801895176806887,0.0022797185778127608,0.008821765538278467,-0.001693533223073078
0.012287588356624088,0.02476281759774416,0.05461050207810991,0.0017928632816352238,0.0022740109188885125,0.008816044195852147,-0.0017329816322275982
0.012228535763548121,0.024718999218274758,0.05461671494153414,0.0017839182914947218,0.002268343284774917,0.008810332962087566,-0.0017722819842141762
0.012190380524813702,0.02465822807576224,0.05460996197769378,0.0019530591920482493,0.0023357445281910224,0.008809790269395691,-0.0017985875072303317
0.012152027320046672,0.0245976007206724,0.054603144695083636,0.0019664756012967802,0.002340150370267237,0.008804882473128146,-0.0018247997507327487
0.012113484266560823,0.02453711685627856,0.05459626340172515,0.001959129136024505,0.00233555169661513,0.008799300862588592,-0.0018509190636184086
0.012074759291862277,0.02447677618611344,0.05458931840408654,0.0019567204470849696,0.002333181929725293,0.00879390128802838,-0.0018769457931449227
0.012035860138359307,0.024416578413972264,0.054582310007092336,0.0019470489834489363,0.0023273550909765164,0.008788205267805915,-0.0019028802849402823
0.011996794367930216,0.024356523243915875,0.05457523851413292,0.0019373358731604809,0.002321526918039432,0.008782516719558231,-0.0019287228830126985
0.011957569366356514,0.024296610380273663,0.05456810422707402,0.0019284450195471649,0.0023159414778455146,0.008776834599408973,-0.001954473929760288
0.011918192347624429,0.024236839527646638,0.05456090744626599,0.0019187787987123247,0.0023101536878058467,0.008771162173124231,-0.0019801337659806004
0.01187867035810077,0.02417721039091034,0.05455364847055322,0.0019118473851675265,0.0023052777511997674,0.008765521856344589,-0.0020057027308802503
0.011839010280586716,0.02411772267521776,0.054546327597283326,0.0019024739097574879,0.0022995430708183467,0.008759851856477594,-0.0020311811620843667
0.0117992188382551,0.02405837608600221,0.0545389451223163,0.0019478286296415687,0.0023158205017795074,0.00875554451505256,-0.0020542397042248306
0.011759302598474571,0.0239991703289802,0.05453150134003368,0.0019522279747085766,0.0023156544663118107,0.008750203451535377,-0.0020772149670288263
0.011719267976524398,0.023940105110154174,0.054523996543347504,0.0019456456015733415,0.0023113447072803553,0.008744687077025966,-0.0021001072652104963
0.011665441745914405,0.023898199505075355,0.054529057132776684,0.001991403833854338,0.0023285381890272064,0.008740479055462037,-0.0021355430210650357
0.011625842905863736,0.02383933858874879,0.05452140188835389,0.0020378207771872816,0.0023474338278313637,0.008736675334207097,-0.0021582410363757553
0.011586130156010021,0.023780617772820888,0.0545136865923974,0.002028293472634156,0.002341925995886439,0.008731080091650491,-0.002183168423070349
0.011546309626009728,0.02372203676142736,0.054505911531625435,0.002024640394274643,0.0023391102193999185,0.00872570870756183,-0.002208007662315263
0.011506387303752929,0.023663595259025963,0.054498076991333286,0.0020705932140323577,0.0023584836778079753,0.008721984691592308,-0.0022304488641252465
0.011504820296663262,0.02360474283407136,0.0544848006629512,0.002064819672853339,0.002354332521022209,0.008716440667074618,-0.0022474264863170945
0.011464188221225273,0.023546596862962482,0.05447686148324553,0.0020566020102002203,0.002349554492810748,0.00871093391482995,-0.00226971948720274
0.011423480130407972,0.02348858935117801,0.05446886362827441,0.0020467950646676594,0.0023439874858893137,0.00870536120024183,-0.0022919320643605
0.011382701311019143,0.023430720005828846,0.05446080737791378,0.002043408266960286,0.002340796877843424,0.008699886219496866,-0.0023163601218715857
0.01134185692765897,0.0233729885343492,0.054452693010658006,0.002033166266561129,0.002334945378918573,0.008694287553799626,-0.002338406819134019
0.01130095202551085,0.023315394644499002,0.054444520803628084,0.0020230756316765575,0.0023291123954945663,0.008688687191518248,-0.0023603740066865803
0.011259991533057907,0.023257938044366232,0.054436291032579916,0.0020204429079727373,0.0023262424044646515,0.008683245217363236,-0.002384549782759357
0.011218980264728309,0.02320061844236921,0.05442800397191258,0.002010497101308475,0.002320458155020935,0.008677654426110541,-0.0024086402101821183
0.01117792292347104,0.023143435547258832,0.05441965989467639,0.002004231074534517,0.002316112947665437,0.00867213627600161,-0.0024326456010103077
0.011136824103264628,0.02308638906812087,0.0544112590725809,0.0019949680295926683,0.002310576719871535,0.008666561923945137,-0.002456566265877584
0.01109568829156093,0.02302947871437809,0.05440280177600298,0.001985060691094221,0.00230480815674262,0.008660987087134692,-0.002480402514004117
0.011054519871665624,0.02297270419579251,0.05439428827399482,0.0019765417376367946,0.002299556680865311,0.00865543464790217,-0.0025041546532047487
0.011013323125058441,0.02291606522246749,0.054385718834291635,0.001970013650265254,0.0022953984308796103,0.008650018536732557,-0.002527822989897039
0.010972102233653344,0.02285956150484988,0.05437709372331969,0.0019614439280609707,0.0022903121731772487,0.00864453094351951,-0.002551407829109412
0.010930861282002349,0.02280319275373209,0.054368413206203926,0.0019516264326002086,0.0022845762860461386,0.008638979172358997,-0.002574909474489025
0.0108896042594434,0.022746958680254187,0.054359677546775766,0.0019418636474715996,0.0022788552618840754,0.008633432148897459,-0.002598328228309805
0.010848335062193992,0.02269085899590584,0.05435088700758075,0.0019321441212167376,0.002273148111783883,0.008627890630330086,-0.0026216643914802953
0.010807057495393476,0.02263489341252843,0.054342041849886114,0.0019224732267363015,0.002267455276143031,0.008622354228143841,-0.002644918263551427
0.010765775275094008,0.02257906164231696,0.05433314233368836,0.0019128507237700433,0.0022617767192683264,0.008616822934457443,-0.002668090142724344
0.010724492030202508,0.022523363397822026,0.05432418871772077,0.001903262467636266,0.002256116109872566,0.008611299141820985,-0.00269118032585812
0.010683211304375084,0.022467798391951756,0.0543151812594608,0.001893741850811182,0.002250466521611572,0.008605777668239945,-0.0027141891084773023
0.010641936557865059,0.02241236633797363,0.054306120215137484,0.001884263081207333,0.0022448305671146297,0.008600261669814575,-0.0027371167847796423
0.010600671169326285,0.02235706694951648,0.05429700583973879,0.0018748268603758398,0.0022392086203049575,0.008594751179955071,-0.0027599636476436096
0.010559418437572498,0.02230189994057221,0.05428783838701888,0.0018654378343998027,0.002233600766655965,0.008589245765984615,-0.002782729988635796
0.010518181583294376,0.02224686502549764,0.054278618109505336,0.0019087983842447863,0.0022508229746264034,0.008585411511030979,-0.0028054160980184803
0.010570763458137095,0.022197162880254626,0.05425955890603258,0.0018992683712523407,0.0022452313012324017,0.008579925360587036,-0.002818235912283154
0.010620487961520366,0.02214753866047561,0.05424048257720775,0.0018897512344228503,0.0022396109066875045,0.008574436285762281,-0.002831011269616664
0.010610768351900128,0.02209263983972696,0.05422597748170997,0.0018900770197416519,0.002237804071939397,0.008569143246581739,-0.002848330566200638
0.01056723731066494,0.02203818106329757,0.05421661060995624,0.0018818892244387253,0.002232859678600047,0.008563731056232794,-0.0028707608298776943
0.010523771352359731,0.021983852256858602,0.05420719196317855,0.0018835030322470767,0.002231601863692089,0.008558479967097094,-0.0028933403074783014
0.0104803726789851,0.0219296531431302,0.05419772178750642,0.0018753446399850848,0.002226658397005774,0.008553075574545299,-0.002915461931510592
0.01043704343690293,0.021875583445160263,0.05418820032788741,0.0018659531395867773,0.0022210821718473685,0.008547609590344512,-0.0029375055412217896
0.010393785717986392,0.021821642886326476,0.05417862782809407,0.0018572069613044355,0.0022158507068105932,0.008542187589241716,-0.0029594714164015296
0.010350601560744332,0.02176783119033822,0.05416900453073065,0.001847932316766983,0.0022103387415268023,0.008536739676137191,-0.0029813598355985862
0.01030749295142123,0.02171414808123859,0.05415933067723993,0.0018386886707716984,0.0022048036175278413,0.008531287960187496,-0.003003171076127839
0.010264461825073012,0.021660593283406283,0.05414960650790991,0.001830150877409216,0.0021996470576324577,0.008525884090973315,-0.003024905414077205
0.010310465021048472,0.021612565266183503,0.05413019820821178,0.001839956044051295,0.0022024874727104766,0.008521081824488363,-0.0030343558363755962
0.010353884977568095,0.02156460927909309,0.05411077440689032,0.0018307358212517179,0.0021969865145398198,0.008515650362975972,-0.003046340779716335
0.010394815863108527,0.02151672552692465,0.05409133520688726,0.0018215686579047645,0.0021914852118905107,0.008510219864373005,-0.0030582838385442845
0.0104333471701438,0.021468914210741517,0.05407188071067569,0.001815838880576729,0.002187627021412121,0.008504941217645742,-0.003070185150859897
0.010469564044330923,0.021421175527918758,0.05405241102026247,0.0018068122253075468,0.002182221298956184,0.008499533363662725,-0.00308204485412613
0.010503547583450288,0.021373509672180973,0.05403292623719054,0.00179785283733799,0.00217684292663273,0.00849413218091673,-0.0030938630852709176
0.010535375109487896,0.021325916833639488,0.05401342646254133,0.0017903552471015796,0.002172174780903196,0.008488800703703291,-0.0031030742555169387
0.01056512041679863,0.021278397198829284,0.05399391179693703,0.0017816062128506703,0.0021667779723192945,0.008483390233506917,-0.0031122513834338297
0.010592853998920557,0.021230950950745675,0.05397438234054292,0.001773046097260652,0.002161587937738016,0.00847801995141834,-0.003121394584894452
0.01061864325628046,0.02118357826888032,0.053954838193069635,0.0017665041693398407,0.00215733788488284,0.008472734359764365,-0.0031305039753074665
0.010642552686760098,0.021136279329257168,0.05393527945377556,0.0017579910542752837,0.0021520094866035065,0.00846733844880301,-0.003139579669619634
0.010664644060855504,0.021089054304467912,0.053915706221469004,0.0017505645764610113,0.002147346343272366,0.00846202689982342,-0.003148621782318034
0.0108473309915792,0.02107012566870884,0.053888961163485634,0.0018621749127376856,0.0021936119286705413,0.008460260314797949,-0.003150472996407347
0.011020635526236448,0.02105093669777698,0.05386222491472418,0.001853712916911929,0.0021886739011232528,0.008454946474122055,-0.003152313962445641
0.01118514614667312,0.021031491276802847,0.053835497495069336,0.0018599750971050274,0.0021905328785648,0.008450159004262795,-0.0031541447159671447
|
closed
|
2017-08-16T07:54:49Z
|
2017-09-09T02:12:52Z
|
https://github.com/ClimbsRocks/auto_ml/issues/314
|
[] |
azuric
| 4
|
JaidedAI/EasyOCR
|
deep-learning
| 951
|
EasyOCR not able to differentiate character '5' from '6'
|
Using the default EasyOCR reader below:
reader = easyocr.Reader(['en'], gpu=True)
To 'readText' the following image:
'351'

It returns '361' with a very high confidence level of 99.9% instead of '351'.
Any idea how to read numerical characters accurately? Is another model besides 'en' preferred, or do I have to train a new model for this?
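One thing I have been experimenting with (a sketch; whether it actually resolves the 5/6 confusion is an open question) is constraining the recognizer to digits and trying the beam-search decoder:
```python
# Sketch: restrict recognition to digits and try the beam-search decoder.
# This narrows the character set but may not by itself fix the 5/6 mix-up.
import easyocr

reader = easyocr.Reader(['en'], gpu=True)
results = reader.readtext(
    'digits.png',                # placeholder image path
    allowlist='0123456789',      # only consider numeric characters
    decoder='beamsearch',
)
for bbox, text, confidence in results:
    print(text, confidence)
```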
|
open
|
2023-02-16T07:16:20Z
|
2023-02-16T07:16:20Z
|
https://github.com/JaidedAI/EasyOCR/issues/951
|
[] |
bedexter82
| 0
|
psf/black
|
python
| 4,315
|
f-string internal formatting to be consistent with non-f-string formatting
|
**Is your feature request related to a problem? Please describe.**
Consider a simple python script:
```python
import numpy as np
a=np.arange(9).reshape((3,3))
print(a[1,2])
print(f"a[1,2] = {a[1,2]}")
```
Running black on this it would by default provide:
```
$ black --diff a.py
--- a.py 2024-04-19 01:29:28.562943+00:00
+++ a.py 2024-04-19 01:29:52.064944+00:00
@@ -1,4 +1,5 @@
import numpy as np
-a=np.arange(9).reshape((3,3))
-print(a[1,2])
+
+a = np.arange(9).reshape((3, 3))
+print(a[1, 2])
print(f"a[1,2] = {a[1,2]}")
would reformat a.py
All done! ✨ 🍰 ✨
1 file would be reformatted.
```
You can see that the final two print statements now have inconsistent formatting. The first has a space inserted after the comma in `a[1,2]`, but the f-string has not adjusted `{a[1,2]}` to `{a[1, 2]}`.
If I accept the formatting black suggests, and then run flake8 on the same piece of code, flake8 will complain about the lack of a space after the comma in the f-string.
```
$ black a.py
reformatted a.py
All done! ✨ 🍰 ✨
1 file reformatted.
$ flake8 a.py
a.py:5:22: E231 missing whitespace after ','
```
**Describe the solution you'd like**
Formatting applied to code outside of f-strings should also apply to the code within the `{..}` in the f-strings.
**Describe alternatives you've considered**
Disable that warning in flake8.
**Additional context**
N/A.
|
closed
|
2024-04-19T01:32:49Z
|
2024-04-19T01:36:39Z
|
https://github.com/psf/black/issues/4315
|
[
"T: enhancement",
"F: strings"
] |
ColemanTom
| 1
|
ivy-llc/ivy
|
pytorch
| 28,670
|
Fix Frontend Failing Test: tensorflow - linalg.jax.lax.linalg.qr
|
To-do List: https://github.com/unifyai/ivy/issues/27499
|
closed
|
2024-03-23T16:43:50Z
|
2024-03-27T18:16:59Z
|
https://github.com/ivy-llc/ivy/issues/28670
|
[
"Sub Task"
] |
ZJay07
| 0
|
huggingface/diffusers
|
deep-learning
| 10,817
|
auto_pipeline missing SD3 contol nets
|
### Describe the bug
Hey, auto_pipeline seems to be missing the ControlNet variants for SD3.
venv\Lib\site-packages\diffusers\pipelines\auto_pipeline.py
### Reproduction
Load an SD3 model checkpoint with a ControlNet using any of the auto pipelines; you will just get the non-ControlNet variants, because the ControlNet ones are not set in the configuration.
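In the meantime I load the SD3 ControlNet pipeline explicitly instead of going through the auto pipeline (a sketch; the model ids below are placeholders):
```python
# Sketch of the manual workaround: load the SD3 ControlNet pipeline
# directly rather than via AutoPipeline. Repo ids are placeholders.
import torch
from diffusers import SD3ControlNetModel, StableDiffusion3ControlNetPipeline

controlnet = SD3ControlNetModel.from_pretrained(
    "some-org/sd3-controlnet",                           # placeholder repo id
    torch_dtype=torch.float16,
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",   # placeholder repo id
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
```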
### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.32.2
- Platform: Windows-10-10.0.19045-SP0
- Running on Google Colab?: No
- Python version: 3.12.7
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.27.1
- Transformers version: 4.48.0
- Accelerate version: 1.2.1
- PEFT version: not installed
- Bitsandbytes version: 0.45.2
- Safetensors version: 0.5.2
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 3080 Ti, 12288 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
|
closed
|
2025-02-18T12:54:40Z
|
2025-02-24T16:21:03Z
|
https://github.com/huggingface/diffusers/issues/10817
|
[
"bug",
"help wanted",
"contributions-welcome"
] |
JoeGaffney
| 3
|
tensorflow/datasets
|
numpy
| 5,538
|
[data request] <telecoms network performance>
|
* Name of dataset: <name>
* URL of dataset: <url>
* License of dataset: <license type>
* Short description of dataset and use case(s): <description>
Folks who would also like to see this dataset in `tensorflow/datasets`, please thumbs-up so the developers can know which requests to prioritize.
And if you'd like to contribute the dataset (thank you!), see our [guide to adding a dataset](https://github.com/tensorflow/datasets/blob/master/docs/add_dataset.md).
|
closed
|
2024-07-27T16:28:48Z
|
2024-07-29T08:59:00Z
|
https://github.com/tensorflow/datasets/issues/5538
|
[
"dataset request"
] |
MngomeT
| 1
|