| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 10,780
|
name with flat for aliased()
|
### Describe the use case
aliased() currently prohibits a name in conjunction with `flat=True`. Presumably this is because it would require some arbitrary naming convention. In reality, `flat=True` already has an arbitrary naming convention (suffix with numbers) which is useful for the query compiler, but not so much for a human reading the generated SQL. This is my use case. I've built a dynamic query builder that often results in some very complex queries that need to be debugged and the auto-aliased names are often hard to keep track of. I've been using the change in
https://github.com/zzzeek/sqlalchemy/pull/513
for over a year and it has been quite helpful with no problems. I tried adding a new parameter named flat_prefix, but was in over my head and couldn't make it work. I will work up a PR for the rel_2_0 branch if the approach I demonstrate below is acceptable.
### Databases / Backends / Drivers targeted
all
### Example Use
```
from sqlalchemy import *
from sqlalchemy.orm import *

Base = declarative_base()

class Entity(Base):
    id = Column(Integer, primary_key=True)
    polymorphic_type = Column(Text, nullable=False)
    name = Column(Text, nullable=False)
    __tablename__ = 'entity'
    __mapper_args__ = {'polymorphic_on': 'polymorphic_type'}

class Company(Entity):
    id = Column(Integer, ForeignKey('entity.id'), primary_key=True)
    industry = Column(Text, nullable=False)
    __tablename__ = 'company'
    __mapper_args__ = {'polymorphic_identity': __tablename__}

class Employee(Entity):
    id = Column(Integer, ForeignKey('entity.id'), primary_key=True)
    title = Column(Text, nullable=False)
    company_id = Column(Integer, ForeignKey('company.id'), nullable=False)
    company = relationship(
        'Company',
        foreign_keys=company_id,
        backref=backref('employees', order_by='Employee.name'),
    )
    __tablename__ = 'employee'
    __mapper_args__ = {'polymorphic_identity': __tablename__}

engine = create_engine('postgresql:///flat_prefix')
Session = sessionmaker(bind=engine)
session = Session()
Base.metadata.create_all(engine)

vendor = Company(name='XYZ Widgets', industry='Manufacturing')
customer = Company(name='ABCMart', industry='Retail')
vendor.employees = \
    [Employee(name=f'Employee{i}', title='Production line') for i in range(5)]
customer.employees = \
    [Employee(name=f'Employee{i}', title='Sales Clerk') for i in range(5)]
session.add_all((vendor, customer))
session.commit()

query = (
    session
    .query(Company.name, Employee.name)
    .join(Employee, Company.employees)
    .filter(Company.industry == 'Manufacturing')
)
assert str(query) == \
    '''SELECT entity.name AS entity_name, entity_1.name AS entity_1_name
FROM entity JOIN company ON entity.id = company.id JOIN (entity AS entity_1 JOIN employee AS employee_1 ON entity_1.id = employee_1.id) ON company.id = employee_1.company_id
WHERE company.industry = %(industry_1)s'''
print('It can be very difficult to keep track of which entity alias is which')

vendor = aliased(Company, name='vendor', flat=True)
worker = aliased(Employee, name='worker', flat=True)
query = (
    session
    .query(vendor.name, worker.name)
    .join(worker, vendor.employees)
    .filter(vendor.industry == 'Manufacturing')
)
assert str(query) == \
    '''SELECT vendor_entity.name AS vendor_entity_name, worker_entity.name AS worker_entity_name
FROM entity AS vendor_entity JOIN company AS vendor_company ON vendor_entity.id = vendor_company.id JOIN (entity AS worker_entity JOIN employee AS worker_employee ON worker_entity.id = worker_employee.id) ON vendor_company.id = worker_employee.company_id
WHERE vendor_company.industry = %(industry_1)s'''
print('Much easier to parse (by a human)!')
```
### Additional context
_No response_
|
open
|
2023-12-19T18:26:30Z
|
2023-12-19T21:18:10Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/10780
|
[
"sql",
"use case"
] |
ericatkin
| 3
|
waditu/tushare
|
pandas
| 972
|
Calling the API from RStudio returns empty data? My credits are exactly enough
|
closed
|
2019-03-22T09:43:04Z
|
2019-03-25T13:10:56Z
|
https://github.com/waditu/tushare/issues/972
|
[] |
daisyldf
| 2
|
|
pytorch/pytorch
|
machine-learning
| 149,502
|
[Inductor] register_module_forward_pre_hook lead to compiled model produce wrong inference results
|
### 🐛 Describe the bug
Given the same inputs, the inference results of the compiled model were not equivalent to those of the original model before/after the execution of `register_module_forward_pre_hook(pre_hook)`.
Such results are bizarre!
```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(10, 5),
    torch.nn.ReLU(),
    torch.nn.Linear(5, 2)
)
inputs = torch.arange(10, dtype=torch.float32).unsqueeze(0)
res1 = model(inputs)
print(f"original inference results: {res1}")

def pre_hook(module, input):
    modified_input = input[0] + 1.0
    return (modified_input,)

handle = torch.nn.modules.module.register_module_forward_pre_hook(pre_hook)
res2 = model(inputs)
print(f"inference results after hook: {res2}")
#handle.remove()
compiled_model = torch.compile(model, backend='inductor')
with torch.no_grad():
    compiled_out = compiled_model(inputs)
    print(f"inference results with compiled model {compiled_out}")
torch.testing.assert_close(res2, compiled_out)
```
### Outputs
```
original inference results: tensor([[-0.8701, 0.1359]], grad_fn=<AddmmBackward0>)
inference results after hook: tensor([[-1.4718, 0.5898]], grad_fn=<AddmmBackward0>)
inference results with compiled model tensor([[-1.4539, 0.4481]])
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0319/torch.linalg.matrix_rank.py", line 23, in <module>
torch.testing.assert_close(res2, compiled_out)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/testing/_comparison.py", line 1519, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 2 / 2 (100.0%)
Greatest absolute difference: 0.14176997542381287 at index (0, 1) (up to 1e-05 allowed)
Greatest relative difference: 0.3164082467556 at index (0, 1) (up to 1.3e-06 allowed)
```
### Error logs
_No response_
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.16
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bdhirsh
|
open
|
2025-03-19T11:13:23Z
|
2025-03-19T22:03:42Z
|
https://github.com/pytorch/pytorch/issues/149502
|
[
"high priority",
"triage review",
"oncall: pt2",
"module: pt2-dispatcher"
] |
Cookiee235
| 2
|
mars-project/mars
|
pandas
| 3,366
|
[BUG]Build fails under Windows platform
|
**Bug description**
The MSVC team recently added Mars as part of RWC testing to detect compiler regression. Seems the project will fail to build under Windows due to error C1189: #error: unsupported platform. Could you please take a look?
**To Reproduce**
1. Open a VS2022 x64 Tools command prompt.
2. git clone C:\gitP\Tencent\mars C:\gitP\Tencent\mars (the commit SHA we use is 6c71f72)
3. Build the project from scratch.
**Expected behavior**
Build passed.
**Additional context**
The problem seems to be that some compilation errors occurred when compiling the Mars project using Visual Studio 2022, which involved some header files of the OpenSSL library, resulting in error C1189: unsupported platform error.
[Build (3).log](https://github.com/mars-project/mars/files/15282079/Build.3.log)
Attached is the build log.
We located the problematic header file and found that line 16 causes the error. We have applied a patch to fix this issue.
[Mars_platform_fix.patch](https://github.com/mars-project/mars/files/15282111/Mars_platform_fix.patch)
If you need more information or have any questions, please leave a message under this issue.
|
open
|
2024-05-11T08:18:43Z
|
2024-05-14T02:39:33Z
|
https://github.com/mars-project/mars/issues/3366
|
[] |
brianGriifin114
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
pytorch
| 611
|
Quality of the voice
|
Hi, I am trying to clone the voice of famous people like Abdul Kalam, Modi. I collected their speeches from youtube videos. But the quality of the voice is very low. There is no similarity between the voice generated by the model and the target's voice.
I am attaching the generated audio file (.wav) and the audio file which I used for training (.mp3):
https://drive.google.com/file/d/1ESAdcecXgAkIjZMKWsE236pkcVv7YDco/view?usp=sharing, https://drive.google.com/file/d/1c99n-iD36q6cHoir4S9qviCGe-D9wZvL/view?usp=sharing, https://drive.google.com/file/d/1y_RtrygSIBqI-ODYy80AkYE8kSdlSaHB/view?usp=sharing
|
closed
|
2020-11-30T00:18:59Z
|
2020-12-05T08:04:57Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/611
|
[] |
sriharsha0806
| 2
|
google-research/bert
|
nlp
| 631
|
Train with bert_multi_cased OOM error, but works with bert_cased
|
Hi guys!
I'm working on a project where, basically, I add a CNN + maxpool + dense layer with softmax after the bert embeddings, to perform classification on different datasets. I'm running this locally on my computer, on my GPU. Until now, I was only working with English datasets (such as the TREC question dataset - which is not even that big with ~6000 examples), so I was using the bert_cased model (loaded from tfhub with tensorflow_hub - trainable), and it was working fine. But since eventually I'll have some other datasets in other languages, I made the switch to the bert_multi_cased model.
When I do this, however, I get an OOM error. If I switch back to bert_cased, again, no problem. But if I try to use bert_multi_cased, OOM. I tried reducing batch size from 4 to 1 even, but I still get OOM (I know, 4 is not a stupendous batch size either, but it's getting the job done for now).
From the documentation, if I understand correctly, both are supposed to have 110M parameters in all, so I fail to see why multi_cased would need so much more memory that even reducing the batch size to 1 wouldn't solve it.
So, am I missing something? What could be causing bert_multi_cased to give me an OOM when bert_cased works fine (and with a 4x bigger batch even)?
For info, my GPU is a Geforce GTX 1060 6GB, using python 3.6.5, bert-tensorflow 1.0.1, tensorflow-gpu 1.13.1, and tensorflow-hub 0.4.0. I based most of my code from the "predicting movie reviews with bert on tf hub" notebook that's here on this github, so I'm using estimators and all that. And here's the error it gives (I hid the traceback since it didn't look very informative - basically saying the error originated from estimator.train - but if anyone wants, I can post it here):
>2019-05-09 00:07:32.176320: I tensorflow/core/common_runtime/bfc_allocator.cc:647] Stats:
>
> Limit: 4945621811
> InUse: 4651862784
> MaxInUse: 4651865344
> NumAllocs: 1445
> MaxAllocSize: 367248384
>
> OP_REQUIRES failed at resource_variable_ops.cc:593 : Resource exhausted: OOM when allocating tensor with shape[119547,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
> [...]
> tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[119547,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
> [[{{node module_apply_tokens/bert/embeddings/embedding_lookup}}]]
> Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
>
> [[{{node loss/Mean}}]]
> Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
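As a quick back-of-the-envelope check on the log above (my own assumption, based on the shape `[119547,768]` matching multilingual BERT's vocabulary size): the failing allocation appears to be the token embedding table, and its float32 size works out to exactly the reported MaxAllocSize:
```python
# 119547 vocab entries x 768 hidden dims x 4 bytes (float32)
print(119547 * 768 * 4)  # 367248384 -- equal to MaxAllocSize in the log above
```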
Any suggestions are welcome!
|
open
|
2019-05-09T05:30:30Z
|
2019-06-20T02:50:42Z
|
https://github.com/google-research/bert/issues/631
|
[] |
bernardoccordeiro
| 1
|
nonebot/nonebot2
|
fastapi
| 2,461
|
Docs: suggest adding a sort-by-update-time option to the plugin store
|
### Problem to solve
Sometimes old plugins suddenly get updated again; with a sort-by-recently-updated option you could go in and see what was updated.
### Describe the desired feature
See title.
|
closed
|
2023-11-20T14:22:37Z
|
2025-02-26T15:05:08Z
|
https://github.com/nonebot/nonebot2/issues/2461
|
[
"documentation"
] |
mmmjie
| 1
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 862
|
Loading the model after vocabulary expansion
|
### Pre-submission checklist
- [X] Make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] Since the related dependencies are updated frequently, make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues, and found no similar problem or solution
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
- [X] Model correctness check: be sure to verify the model against [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with an incorrect model, correct behavior and results cannot be guaranteed
### Issue type
None
### Base model
None
### Operating system
None
### Detailed description
After expanding the vocabulary I obtained merge_hf and replaced the original Llama 2 tokenizer files. Quantization with llama.cpp succeeds, but an error is raised when loading the model.
### Dependencies (required for code-related issues)
```
# Paste dependency info here
```
llama.cpp
### Runtime logs or screenshots
```
# Paste runtime logs here
```
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = mostly Q5_0
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 4.33 GiB (5.52 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.09 MB
error loading model: create_tensor: tensor 'token_embd.weight' has wrong shape; expected 4096, 49953, got 4096, 32000, 1, 1
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
File "/home/aiproject/aiqa.py", line 2, in <module>
from llm_file import llm,answer_func
File "/home/aiproject/llm_file.py", line 24, in <module>
llm=llm_init()
File "/home/aiproject/llm_file.py", line 18, in llm_init
llm = Llama(model_path=model_path, n_ctx=n_ctx, n_batch=n_batch, n_threads=N_THREADS)
File "/home/miniconda3/lib/python3.10/site-packages/llama_cpp/llama.py", line 365, in __init__
assert self.model is not None
AssertionError
|
closed
|
2023-10-30T03:58:32Z
|
2023-11-15T00:36:40Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/862
|
[
"stale"
] |
wzg-zhuo
| 3
|
pallets/quart
|
asyncio
| 375
|
Type checking has been failing in CI for the last 6 months
|
Type checking doesn't pass when running tests for PRs
https://github.com/pallets/quart/actions/workflows/tests.yaml
https://github.com/pallets/quart/actions/runs/9331655216
|
closed
|
2024-11-13T22:16:19Z
|
2024-11-29T00:26:12Z
|
https://github.com/pallets/quart/issues/375
|
[] |
JamesParrott
| 5
|
hankcs/HanLP
|
nlp
| 1,339
|
HanLP multi-instance refactoring in progress
|
Due to limitations of the early design, HanLP's `CustomDictionary`, `CoreDictionary`, `CoreBiGramTableDictionary`, etc. are currently static resource classes. Some scenarios, however, require loading different dictionaries, e.g. different user instances within the same JVM, or different bigram models for different domains. Owing to my limited personal time, this feature has kept everyone waiting.
All static resource classes are now being refactored step by step. Current progress:
- [x] `CustomDictionary` refactoring complete
  - If you don't need multiple instances, no changes are required; 1.x remains backward compatible
  - If you need multiple instances, you can create a new `DynamicCustomDictionary` instance for your tokenizer `segment` or `analyzer` and call that instance's `insert` method.
  - i.e. `segment.customDictionary = new DynamicCustomDictionary("词典1.txt", "词典2.txt")`
  - then `segment.customDictionary.insert`
  - see the [demo](https://github.com/hankcs/HanLP/blob/74e6d7457b02ab872aa24c8476bf0b4449d8650e/src/test/java/com/hankcs/demo/DemoCustomDictionary.java#L70)
- [ ] `CoreDictionary` refactoring in progress
- [ ] `CoreBiGramTableDictionary` refactoring in progress
|
closed
|
2019-12-05T05:15:29Z
|
2023-03-22T06:20:13Z
|
https://github.com/hankcs/HanLP/issues/1339
|
[
"ignored"
] |
hankcs
| 10
|
JaidedAI/EasyOCR
|
machine-learning
| 619
|
Doesn't it support macos?
|
My system is macOS Monterey, and the computer is an MBP 2019 16-inch. I tried to install it using both `pip install easyocr` and `pip install git+git://github.com/jaidedai/easyocr.git`. But just when I run `import easyocr`, zsh says "segmentation fault". I wonder if it supports macOS and whether I need to install it manually?
|
closed
|
2021-12-10T13:49:45Z
|
2021-12-10T14:03:36Z
|
https://github.com/JaidedAI/EasyOCR/issues/619
|
[] |
hengyuekang
| 1
|
lucidrains/vit-pytorch
|
computer-vision
| 281
|
Potential regression with PT 2.0 and CUDA 12.2/CuDNN 8.9.4
|
Hi, we are benchmarking ViT on H100 GPUs but found it is slower with the newer CUDA/cuDNN versions recommended by Nvidia.
PyTorch version: 2.0.1
model config: default (large)
datatype: bf16
### CUDA 12.2 + CUDNN 8.9.4 , avg throughput: 94.95987583, 3 runs:
"throughput": 95.0631667348959
"throughput": 94.94235140772204
"throughput": 94.87410935279841
### CUDA 11.8 + CUDNN 8.9.2 , avg throughput: 106.193065, 3 runs:
"throughput": 106.35817540604774
"throughput": 106.06269191571305
"throughput": 106.15832773232279
This is unexpected. Is there some potential regression in the vanilla implementation of attention? We do see speedups on other language models using flash-attention or xformers. Any insights will be helpful, thanks!
|
closed
|
2023-09-28T22:39:24Z
|
2023-10-01T15:42:59Z
|
https://github.com/lucidrains/vit-pytorch/issues/281
|
[] |
roywei
| 1
|
benbusby/whoogle-search
|
flask
| 417
|
[BUG] First result not shown sometimes
|
**Describe the bug**
When making a search Whoogle sometimes omits the first result.
**To Reproduce**
Search something like "Google" and you will see the second result on top. This only happens on some searches but not others. It will happen every time on certain search terms.
I was not able to reproduce this using the public instances. I would rather keep my search engine URL omitted, but if needed I can PM it to you for testing.
**Deployment Method**
DigitalOcean App via Docker
**Version of Whoogle Search**
Latest build
**Desktop (please complete the following information):**
- OS: Windows
- Browser: Brave
- Version: V1.29.76
**Additional context**
https://imgur.com/a/kOEZA24
https://imgur.com/a/RwpnodJ
|
closed
|
2021-09-05T20:13:38Z
|
2021-10-27T21:15:17Z
|
https://github.com/benbusby/whoogle-search/issues/417
|
[
"bug"
] |
Mazawrath
| 5
|
pytest-dev/pytest-django
|
pytest
| 214
|
Order in which tests are executed
|
pytest-django could / should probably use the same test ordering as Django: https://docs.djangoproject.com/en/1.7/topics/testing/overview/#order-in-which-tests-are-executed
This would mean to run all tests using the `db` fixture before tests using the `transactional_db` fixture, and then the remaining ones.
Django's tests should be run according to the Django documentation linked above.
This could probably be achieved using pytest's `pytest_collection_modifyitems` hook; see the sketch below.
Some related pytest plugin: https://github.com/ftobia/pytest-ordering
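A minimal sketch of that hook-based idea (an illustration only, not pytest-django's actual implementation):
```python
# conftest.py -- reorder collected tests so `db` tests run first, then
# `transactional_db` tests, then everything else, preserving relative order.
def pytest_collection_modifyitems(items):
    def bucket(item):
        fixtures = getattr(item, "fixturenames", ())
        if "transactional_db" in fixtures:
            return 1
        if "db" in fixtures:
            return 0
        return 2

    items.sort(key=bucket)  # list.sort is stable
```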
|
closed
|
2015-02-27T14:30:10Z
|
2019-03-16T02:06:16Z
|
https://github.com/pytest-dev/pytest-django/issues/214
|
[
"enhancement",
"bitesize"
] |
blueyed
| 21
|
JaidedAI/EasyOCR
|
deep-learning
| 358
|
easyocr.recognize is significantly slower when given several boxes to estimate, rather than running it several times with one box each time
|
Hello,
Thank you for this tool, it is great. I want to build on top of it, and execution time is a matter of importance for me (even on CPU).
I don't know if it's a bug or a *feature*, but I've noticed that `easyocr.recognize` is significantly slower when called once and given `n` boxes to estimate the text in, rather than called `n` times with one box each time.
### How to reproduce ###
1/ Download

Run
```python
import easyocr
reader = easyocr.Reader(['ja'], gpu=False)
image = "path/to/example.png"
result = reader.detect(image)
print(result)
# returns ([[117, 471, -3, 37], [120, 596, 32, 64]], [])
```
Then run
```python
import time

L = result[0]
start_time = time.time()
estimate = reader.recognize(image,
                            horizontal_list=L,
                            free_list=[])
print(estimate)
print("--- %s seconds ---" % (time.time() - start_time))
```
and
```python
start_time = time.time()
for box in L:
    estimate = reader.recognize(image,
                                horizontal_list=[box],
                                free_list=[])
    print(estimate)
print("--- %s seconds ---" % (time.time() - start_time))
```
**For me the first one takes ~3.45s, and the second one takes ~2.95s, i.e. about 15% faster.**
Is it an expected behavior?
Many thanks!
|
closed
|
2021-01-28T20:50:10Z
|
2021-02-22T01:04:12Z
|
https://github.com/JaidedAI/EasyOCR/issues/358
|
[] |
fxmarty
| 4
|
dynaconf/dynaconf
|
django
| 1,102
|
How to configure layered environments on files when confg.py is used.
|
The following section of the documentation https://www.dynaconf.com/settings_files/#layered-environments-on-files misses the example for `settings.py` (`py`-based settings). Can someone help me to understand how the configuration should be properly made? For example, I have the following config files:
```python
# settings.py
DATABASE = {
    "HOST": "localhost",
    "PORT": 3306,
    "USERNAME": "base_user",
}
DEBUG = True
```
```python
# settings_dev.py
DATABASE = {
    "USERNAME": "dev_user",
    "PASSWORD": "dev_password",
}
DEBUG = True
```
```python
# settings_prod.py
DATABASE = {
    "HOST": "prod-db-server",
    "USERNAME": "prod_user",
    "PASSWORD": "prod_password",
}
DEBUG = False
```
And I initialized my configuration using the following piece of code:
```python
from dynaconf import Dynaconf
def main():
    settings = Dynaconf(
        settings_files=["settings.py", "settings_dev.py", "settings_prod.py"],
        environments=True,
        merge_enabled=True,
        env="dev",  # dev is explicitly set
    )
    print("Database Host:", settings.DATABASE["host"])
    print("Database Port:", settings.DATABASE["port"])
    print("Database User:", settings.DATABASE["username"])
    print("Database Password:", settings.DATABASE["password"])
    print("Debug Mode:", settings.DEBUG)

if __name__ == "__main__":
    main()
```
When I run this example, I get the following output:
```
Database Host: prod-db-server
Database Port: 3306
Database User: prod_user
Database Password: prod_password
Debug Mode: False
```
The values from `settings_prod.py` are returned. The only workaround I currently found is to skip the `env="dev"` configuration and use only two setting files (default and for target env) `settings_files=["settings.py", "settings_dev.py"]`.
Is there any other solution to make it work as expected with `*.py` config files? I have to use Python configuration because of this: https://github.com/dynaconf/dynaconf/issues/336.
|
open
|
2024-06-02T21:14:37Z
|
2024-07-08T18:37:56Z
|
https://github.com/dynaconf/dynaconf/issues/1102
|
[
"question",
"Docs",
"django"
] |
oleksii-suprun
| 4
|
modin-project/modin
|
pandas
| 6,754
|
Merge partial dtype caches on `concat(axis=0)`
|
We could have merged 'known_dtypes':
```python
import modin.pandas as pd
import numpy as np
from modin.core.dataframe.pandas.metadata import ModinDtypes, DtypesDescriptor
df1 = pd.DataFrame({"a": [1, 2, 3], "b": [3, 4, 5]})
df2 = pd.DataFrame({"a": [3.0, 4.0, 5.4], "b": [True, True, False]})
df2._query_compiler._modin_frame.set_dtypes_cache(
    ModinDtypes(
        DtypesDescriptor({"a": np.dtype(float)}, cols_with_unknown_dtypes=["b"])
    )
)
res = pd.concat([df1, df2])
# known dtypes: {};
# cols with unknown dtypes: ['a', 'b'];
print(res._query_compiler._modin_frame._dtypes)
# Expected:
# known_dtypes: {"a": float}
# cols_with_unknown_dtypes: ["b"]
```
|
closed
|
2023-11-17T16:14:00Z
|
2023-11-21T13:18:32Z
|
https://github.com/modin-project/modin/issues/6754
|
[
"Performance 🚀",
"P2"
] |
dchigarev
| 0
|
idealo/image-super-resolution
|
computer-vision
| 165
|
visible padding border when running the RDN artifact-cancelling net with "by_patch_of_size"
|

[pic]
just check this out
code:
```
from ISR.models import RDN

rdn = RDN(weights='noise-cancel')
sr_img = rdn.predict(lr_img, by_patch_of_size=10)  # lr_img: the low-resolution input image
```
Using different "by_patch_of_size" values produces different sizes of blocks.
SAD
Tested on Ubuntu 18
|
closed
|
2020-12-23T14:14:45Z
|
2021-01-08T10:58:55Z
|
https://github.com/idealo/image-super-resolution/issues/165
|
[] |
DeXtmL
| 1
|
mlflow/mlflow
|
machine-learning
| 14,153
|
[FR] add asynchronous option to client log_artifact
|
### Willingness to contribute
Yes. I would be willing to contribute this feature with guidance from the MLflow community.
### Proposal Summary
Most MlflowClient APIs have a `synchronous` bool kwarg, but log_artifact does not. The proposal is to add the option to MlflowClient.log_artifact
### Motivation
> #### What is the use case for this feature?
Asynchronous logging of artifacts.
> #### Why is this use case valuable to support for MLflow users in general?
Same reason async logging of anything is valuable.
> #### Why is it currently difficult to achieve this use case?
APIs don't expose this.
### Details
log_image already does async logging of artifacts internally so this doesn't seem like it should be hard.
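For illustration, a hypothetical sketch of the proposed call shape, mirroring the pattern of the existing async-capable APIs (the `synchronous` kwarg on `log_artifact` does not exist today):
```python
from mlflow import MlflowClient

client = MlflowClient()
run = client.create_run(experiment_id="0")

# exists today: metrics can already be logged without blocking
client.log_metric(run.info.run_id, "loss", 0.1, synchronous=False)

# proposal: the same opt-in kwarg on log_artifact (hypothetical)
client.log_artifact(run.info.run_id, "model.pkl", synchronous=False)
```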
### What component(s) does this bug affect?
- [X] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [X] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
|
open
|
2024-12-23T23:19:25Z
|
2025-01-20T05:56:49Z
|
https://github.com/mlflow/mlflow/issues/14153
|
[
"enhancement",
"area/artifacts",
"area/tracking"
] |
garymm
| 10
|
skypilot-org/skypilot
|
data-science
| 4,659
|
[Docs] Adding new clouds (DO, vast) to readme/docs
|
Like in #4573, we should add those new clouds to the doc.
Also we might need to update the cloud figure.
|
open
|
2025-02-06T21:09:41Z
|
2025-02-06T22:18:37Z
|
https://github.com/skypilot-org/skypilot/issues/4659
|
[] |
cblmemo
| 2
|
tensorflow/tensor2tensor
|
deep-learning
| 1,610
|
Serving problem: Error parsing text-format tensorflow.SavedModel
|
I trained a transformer model about 3 months ago. Then I export and serve my model based on the tutorial. Everything was fine.
Now I want to train a new model. Training works, but during serving a strange error happens:
```
Error parsing text-format tensorflow.SavedModel: 134169:18: Message type "tensorflow.FunctionDef" has no field named "arg_attr".
SavedModel load for tags { serve }; Status: fail. Took 102019 microseconds.
Loading servable: {name: 1561270023 version: 1561270023} failed: Data loss: Can't parse /content/output_model/export/1561270023/saved_model.pbtxt as text proto
```
You can see my code in [this notebook](https://colab.research.google.com/drive/17KnfYcx3VNvdJ4Rtj1lSL8msmLqTazPC). I also add a **serving** section to show you what happens if I serve my model using `tensorflow_model_server`. I would appreciate it if you could help me figure this out.
|
open
|
2019-06-23T06:59:10Z
|
2019-08-12T11:43:55Z
|
https://github.com/tensorflow/tensor2tensor/issues/1610
|
[] |
zfallahnejad
| 2
|
mirumee/ariadne
|
graphql
| 251
|
Add schema validation to make_executable_schema
|
GraphQL-Core-next implements an `assert_valid_schema` function in `graphql.type.validate` that takes a `GraphQLSchema` and validates its correctness.
Currently, this function is called as part of query execution, during query validation, but we could also run it inside `make_executable_schema`, so developers get this error at application initialization.
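For illustration, a minimal sketch of the idea (graphql-core-next's `build_schema` stands in here for ariadne's internal schema construction):
```python
from graphql import build_schema
from graphql.type.validate import assert_valid_schema

def make_executable_schema(type_defs):
    schema = build_schema(type_defs)
    # fail fast at application startup instead of during the first query
    assert_valid_schema(schema)
    return schema

schema = make_executable_schema("type Query { hello: String }")
```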
|
closed
|
2019-10-02T10:08:45Z
|
2019-10-13T23:41:25Z
|
https://github.com/mirumee/ariadne/issues/251
|
[
"enhancement",
"help wanted",
"roadmap"
] |
rafalp
| 6
|
twopirllc/pandas-ta
|
pandas
| 158
|
Question: just want to know why drawdown isn't in the default strategy
|
closed
|
2020-11-09T13:32:17Z
|
2020-11-12T23:01:41Z
|
https://github.com/twopirllc/pandas-ta/issues/158
|
[
"info"
] |
tangxianrong
| 1
|
|
microsoft/hummingbird
|
scikit-learn
| 3
|
upgrade and test new sklearn version
|
We need to upgrade the sklearn version (currently scikit-learn==0.21.3). To do this, we need to accommodate some API changes in the newer version,
e.g. Imputer is deprecated (it was in preprocessing); now do: `from sklearn.impute import SimpleImputer`
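For illustration, a minimal sketch of that API change (the old import path is shown in a comment):
```python
# old (scikit-learn <= 0.21): from sklearn.preprocessing import Imputer
from sklearn.impute import SimpleImputer
import numpy as np

imp = SimpleImputer(strategy="mean")
X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0]])
print(imp.fit_transform(X))  # NaNs replaced by per-column means
```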
|
closed
|
2020-03-23T17:04:48Z
|
2020-06-11T23:41:52Z
|
https://github.com/microsoft/hummingbird/issues/3
|
[] |
ksaur
| 0
|
Lightning-AI/LitServe
|
fastapi
| 26
|
504 gateway timeouts
|
Hi guys,
Using one GPU works better than using 4 GPUs, e.g.:
I am running it on LightningAI, and not even one request goes through at all when running on 4 devices.
`server = LitServer(SimpleLitAPI(), accelerator="cuda", devices=1, timeout=60)`
vs
`server = LitServer(SimpleLitAPI(), accelerator="cuda", devices=4, timeout=60)`
Any hint?
|
closed
|
2024-04-10T20:12:52Z
|
2024-04-11T17:45:44Z
|
https://github.com/Lightning-AI/LitServe/issues/26
|
[] |
grumpyp
| 7
|
litestar-org/polyfactory
|
pydantic
| 514
|
Bug: Invalid Coverage for Optional Fields with Annotated Constraint
|
### Description
I was trying to produce coverage for a Pydantic model with Annotated Field constraints.
```py
class PartialA(BaseModel):
    a: Annotated[str | None, Field(min_length=1, max_length=10)] = None
```
The coverage function does not yield proper attributes for field `a`.
I tracked it down to the method `get_field_value_coverage` in `BaseFactory` which should extract the proper constraints.
Hope the issue is clear, happy to give further clarifications.
### URL to code causing the issue
_No response_
### MCVE
- This is a test that I wrote which can be used to test my issue
```python
from pydantic import BaseModel, Field
from typing import Annotated
from polyfactory.factories.pydantic_factory import ModelFactory
from polyfactory.pytest_plugin import register_fixture
class A(BaseModel):
    a: Annotated[str, Field(min_length=1, max_length=10)]

class PartialA(BaseModel):
    a: Annotated[str | None, Field(min_length=1, max_length=10)] = None

class PartialB(BaseModel):
    a: str | None = None

class PartialC(BaseModel):
    a: Annotated[int | None, Field(ge=0, le=10)] = None

@register_fixture
class ASchemaFactory(ModelFactory[A]):
    __model__ = A

@register_fixture
class PartialASchemaFactory(ModelFactory[PartialA]):
    __model__ = PartialA

@register_fixture
class PartialBSchemaFactory(ModelFactory[PartialB]):
    __model__ = PartialB

@register_fixture
class PartialCSchemaFactory(ModelFactory[PartialC]):
    __model__ = PartialC

def test_a_schema_factory(a_schema_factory: ASchemaFactory):
    for spec in a_schema_factory.coverage():
        pass

def test_partial_a_schema_factory(partial_a_schema_factory: PartialASchemaFactory):
    for spec in partial_a_schema_factory.coverage():
        pass

def test_partial_b_schema_factory(partial_b_schema_factory: PartialBSchemaFactory):
    for spec in partial_b_schema_factory.coverage():
        pass

def test_partial_c_schema_factory(partial_c_schema_factory: PartialCSchemaFactory):
    for spec in partial_c_schema_factory.coverage():
        pass
```
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
```bash
==================================================================== test session starts =====================================================================
platform linux -- Python 3.12.2, pytest-7.4.3, pluggy-1.3.0
rootdir: /home/rr/work/oss/polyfactory
configfile: pyproject.toml
plugins: cov-4.1.0, hypothesis-6.92.1, Faker-21.0.0, asyncio-0.23.2
asyncio: mode=Mode.AUTO
collected 4 items
tests/test_optional_constraint_coverage_factory.py .F.F
========================================================================== FAILURES ==========================================================================
_______________________________________________________________ test_partial_a_schema_factory ________________________________________________________________
partial_a_schema_factory = <class 'tests.test_optional_constraint_coverage_factory.PartialASchemaFactory'>
def test_partial_a_schema_factory(
partial_a_schema_factory: PartialASchemaFactory):
> for spec in partial_a_schema_factory.coverage():
tests/test_optional_constraint_coverage_factory.py:44:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'tests.test_optional_constraint_coverage_factory.PartialASchemaFactory'>, kwargs = {}, data = {'a': 'GHTBnvXwdvDGBsyHYejs'}
@classmethod
def coverage(cls, **kwargs: Any) -> abc.Iterator[T]:
"""Build a batch of the factory's Meta.model will full coverage of the sub-types of the model.
:param kwargs: Any kwargs. If field_meta names are set in kwargs, their values will be used.
:returns: A iterator of instances of type T.
"""
for data in cls.process_kwargs_coverage(**kwargs):
> instance = cls.__model__(**data)
E pydantic_core._pydantic_core.ValidationError: 1 validation error for PartialA
E a
E String should have at most 10 characters [type=string_too_long, input_value='GHTBnvXwdvDGBsyHYejs', input_type=str]
E For further information visit https://errors.pydantic.dev/2.5/v/string_too_long
polyfactory/factories/base.py:1058: ValidationError
_______________________________________________________________ test_partial_c_schema_factory ________________________________________________________________
partial_c_schema_factory = <class 'tests.test_optional_constraint_coverage_factory.PartialCSchemaFactory'>
def test_partial_c_schema_factory(
partial_c_schema_factory: PartialCSchemaFactory):
> for spec in partial_c_schema_factory.coverage():
tests/test_optional_constraint_coverage_factory.py:54:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'tests.test_optional_constraint_coverage_factory.PartialCSchemaFactory'>, kwargs = {}, data = {'a': 7693}
@classmethod
def coverage(cls, **kwargs: Any) -> abc.Iterator[T]:
"""Build a batch of the factory's Meta.model will full coverage of the sub-types of the model.
:param kwargs: Any kwargs. If field_meta names are set in kwargs, their values will be used.
:returns: A iterator of instances of type T.
"""
for data in cls.process_kwargs_coverage(**kwargs):
> instance = cls.__model__(**data)
E pydantic_core._pydantic_core.ValidationError: 1 validation error for PartialC
E a
E Input should be less than or equal to 10 [type=less_than_equal, input_value=7693, input_type=int]
E For further information visit https://errors.pydantic.dev/2.5/v/less_than_equal
polyfactory/factories/base.py:1058: ValidationError
====================================================================== warnings summary ======================================================================
.venv/lib/python3.12/site-packages/beanie/odm/fields.py:581
/home/rr/work/oss/polyfactory/.venv/lib/python3.12/site-packages/beanie/odm/fields.py:581: DeprecationWarning: `general_plain_validator_function` is deprecated, use `with_info_plain_validator_function` instead.
return core_schema.general_plain_validator_function(validate)
.venv/lib/python3.12/site-packages/pydantic_core/core_schema.py:3902
.venv/lib/python3.12/site-packages/pydantic_core/core_schema.py:3902
.venv/lib/python3.12/site-packages/pydantic_core/core_schema.py:3902
/home/rr/work/oss/polyfactory/.venv/lib/python3.12/site-packages/pydantic_core/core_schema.py:3902: DeprecationWarning: `general_plain_validator_function` is deprecated, use `with_info_plain_validator_function` instead.
warnings.warn(
.venv/lib/python3.12/site-packages/beanie/odm/fields.py:150
.venv/lib/python3.12/site-packages/beanie/odm/fields.py:150
/home/rr/work/oss/polyfactory/.venv/lib/python3.12/site-packages/beanie/odm/fields.py:150: DeprecationWarning: `general_plain_validator_function` is deprecated, use `with_info_plain_validator_function` instead.
python_schema=core_schema.general_plain_validator_function(
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================== short test summary info ===================================================================
FAILED tests/test_optional_constraint_coverage_factory.py::test_partial_a_schema_factory - pydantic_core._pydantic_core.ValidationError: 1 validation error for PartialA
FAILED tests/test_optional_constraint_coverage_factory.py::test_partial_c_schema_factory - pydantic_core._pydantic_core.ValidationError: 1 validation error for PartialC
========================================================== 2 failed, 2 passed, 6 warnings in 0.46s ===========================================================
```
### Release Version
2.15.0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
|
open
|
2024-03-28T02:25:41Z
|
2025-03-20T15:53:15Z
|
https://github.com/litestar-org/polyfactory/issues/514
|
[
"bug"
] |
tharindurr
| 0
|
JaidedAI/EasyOCR
|
deep-learning
| 535
|
memory keep increases (CPU)
|
I'm running this
```
import threading

import easyocr

def reader_text(thread_limiter, reader):
    thread_limiter.acquire()
    try:
        results = reader.readtext(output)  # `output` is the image, as in the original snippet
        # ---- doing something ----
    finally:
        thread_limiter.release()
    return

threads = []
reader = easyocr.Reader(['en'], gpu=False)
thread_limiter = threading.Semaphore(3)
for i in range(sometimes):  # `sometimes` is the iteration count from the original snippet
    x = threading.Thread(target=reader_text,
                         args=(thread_limiter, reader))
    threads.append(x)
    x.start()
for thread in threads:
    thread.join()
```
My memory keeps increasing; I didn't find any clue.
|
closed
|
2021-09-09T11:05:17Z
|
2022-03-02T09:25:33Z
|
https://github.com/JaidedAI/EasyOCR/issues/535
|
[] |
BalajiArun004
| 2
|
kymatio/kymatio
|
numpy
| 894
|
Looks good but we're now missing docstring for `N`, `J` in `scattering_filter_factory`.
|
Looks good but we're now missing docstring for `N`, `J` in `scattering_filter_factory`.
_Originally posted by @janden in https://github.com/kymatio/kymatio/pull/863#pullrequestreview-1012149453_
So these docstrings need to be added.
|
closed
|
2022-06-20T11:53:18Z
|
2023-03-03T07:58:41Z
|
https://github.com/kymatio/kymatio/issues/894
|
[
"doc"
] |
janden
| 0
|
twelvedata/twelvedata-python
|
matplotlib
| 88
|
[Bug]: Setuptools not installed as a requirement, but module won't load without it.
|
```
from pkg_resources import get_distribution, DistributionNotFound
ModuleNotFoundError: No module named 'pkg_resources'
```
twelvedata python package depends on `setuptools` being installed to access pkg_resources, but does not define it as a dependency, so if setuptools is not installed due to an incidental requirement of another package, the module fails to load.
Would be helpful if setuptools was installed when twelvedata is installed, or preferably removing the use of pkg_resources, which is being deprecated, in favor of `importlib.metadata.version("twelvedata")` to fetch the version.
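For illustration, a minimal sketch of the stdlib replacement (Python 3.8+):
```python
from importlib.metadata import PackageNotFoundError, version

try:
    __version__ = version("twelvedata")
except PackageNotFoundError:
    # package metadata unavailable, e.g. when running from a source checkout
    __version__ = "unknown"
```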
|
closed
|
2024-08-29T12:18:03Z
|
2024-08-30T08:28:38Z
|
https://github.com/twelvedata/twelvedata-python/issues/88
|
[] |
EdgyEdgemond
| 1
|
akfamily/akshare
|
data-science
| 5,179
|
AKShare interface problem report
|
The stock_zh_a_hist interface cannot fetch data for the CSI 300 (an index); calling it with 000300 currently raises an error. Tencent's interface can fetch CSI 300 data via sh000300, but it updates very slowly each day.
stock_zh_a_hist_tx_df = ak.stock_zh_a_hist_tx(symbol="sh000300", start_date="20240911",
                                              end_date="20240911",
                                              adjust="qfq")
stock_zh_a_hist_df = ak.stock_zh_a_hist(symbol="000300", period="daily", start_date="20240911", end_date="20240911",
                                        adjust="qfq")




It would be great if an interface for Eastmoney's historical index data could be added: https://quote.eastmoney.com/zs000300.html
|
closed
|
2024-09-12T06:58:14Z
|
2024-09-12T07:33:55Z
|
https://github.com/akfamily/akshare/issues/5179
|
[
"bug"
] |
HolyLl
| 2
|
nltk/nltk
|
nlp
| 2,686
|
Travis CI test fails with Python 3.9
|
Travis CI now also tests with Python 3.9.
The output reveals that the XML Parser used by NLTK's corpus.reader is no longer compatible with the newest Python 3.9 version.
So all NLTK builds will fail Travis CI testing with Python 3.9 until this is fixed:
=================================== FAILURES ===================================
___________________________ [doctest] corpus.doctest ___________________________
615
616 semcor
617 ------
618 The Brown Corpus, annotated with WordNet senses.
619
620 >>> from nltk.corpus import semcor
621 >>> semcor.words('brown2/tagfiles/br-n12.xml') # doctest: +ELLIPSIS
622 ['When', 'several', 'minutes', 'had', 'passed', ...]
623 >>> sent = semcor.xml('brown2/tagfiles/br-n12.xml').findall('context/p/s')[0]
624 >>> for wordform in sent.getchildren():
UNEXPECTED EXCEPTION: AttributeError("'xml.etree.ElementTree.Element' object has no attribute 'getchildren'")
Traceback (most recent call last):
File "/opt/python/3.9.1/lib/python3.9/doctest.py", line 1336, in __run
exec(compile(example.source, filename, "single",
File "<doctest corpus.doctest[135]>", line 1, in <module>
AttributeError: 'xml.etree.ElementTree.Element' object has no attribute 'getchildren'
/home/travis/build/nltk/nltk/nltk/test/corpus.doctest[0m:624: UnexpectedException
920 rte
921 ---
922 The RTE (Recognizing Textual Entailment) corpus was derived from the
923 RTE1, RTE2 and RTE3 datasets (dev and test data), and consists of a
924 list of XML-formatted 'text'/'hypothesis' pairs.
925
926 >>> from nltk.corpus import rte
927 >>> print(rte.fileids()) # doctest: +ELLIPSIS
928 ['rte1_dev.xml', 'rte1_test.xml', 'rte2_dev.xml', ..., 'rte3_test.xml']
929 >>> rtepairs = rte.pairs(['rte2_test.xml', 'rte3_test.xml'])
UNEXPECTED EXCEPTION: AttributeError("'xml.etree.ElementTree.Element' object has no attribute 'getiterator'")
|
closed
|
2021-04-04T08:00:45Z
|
2021-04-04T20:12:34Z
|
https://github.com/nltk/nltk/issues/2686
|
[] |
ekaf
| 1
|
remsky/Kokoro-FastAPI
|
fastapi
| 115
|
Support more than 1 stream at the same time.
|
I noticed that when Kokoro is running, it does not use all of the GPU. However, the latency gets very bad if I send two requests simultaneously. Is there a way to optimize it to support more than one stream simultaneously?
P.S. This is a fantastic project! How do we give back? I don't see a donation link on the readme.
|
open
|
2025-02-03T13:12:56Z
|
2025-03-10T04:21:11Z
|
https://github.com/remsky/Kokoro-FastAPI/issues/115
|
[
"in-progress"
] |
sipvoip
| 16
|
widgetti/solara
|
flask
| 868
|
Solara Dev Documentation is Buggy
|
**Issue:**
When I go to [solara docs](https://solara.dev/documentation/), I cannot immediately scroll on the web page. I see the left side panel open and the content, but I can not scroll. Sometimes, when the page is loading, I noticed that I could scroll, but then a quick "flash" of a grey popup shows and disappears, and afterwards I cannot scroll again.
However, whenever I click on the content itself, the sidebar collapses (with no clear way to open again), and the page becomes scrollable.
**Ideal State:**
There are a few adjustments that need to be made:
- Whenever the doc pages first load (the issue seems to affect all doc pages), the page should be scrollable, even after the page finished loading.
- Whenever I click on the content and the left sidebar collapses, there should be a button that can open the sidebar back up again.
- Some pages seem to extend too far to the right, and there's no way to scroll horizontally, so the content is cut off. Each page should be properly contained within the page size.
|
open
|
2024-11-21T16:03:56Z
|
2024-11-22T09:47:55Z
|
https://github.com/widgetti/solara/issues/868
|
[
"documentation"
] |
jonkimdev
| 1
|
OthersideAI/self-operating-computer
|
automation
| 196
|
Use gpt-4o instead of using gpt-4 turbo
|
The system by default is using gpt-4-turbo. Is it possible to use gpt-4o instead, which is supposed to be better and less expensive?
thanks
|
closed
|
2024-06-16T07:58:46Z
|
2024-07-10T21:51:06Z
|
https://github.com/OthersideAI/self-operating-computer/issues/196
|
[
"enhancement"
] |
aicoder2048
| 1
|
tflearn/tflearn
|
tensorflow
| 403
|
openai gym render and tflearn, can't load any more object with static TLS
|
When using tflearn and rendering a game in OpenAI Gym, it gives me an error:
Unexpected error loading library libGL.so.1: dlopen: cannot load any more object with static TLS
|
closed
|
2016-10-18T00:16:57Z
|
2016-11-19T20:57:12Z
|
https://github.com/tflearn/tflearn/issues/403
|
[] |
gabrieledcjr
| 7
|
mwaskom/seaborn
|
data-visualization
| 3,380
|
Incorrect error bar offset in barplots
|
In whitegrid style, I've noticed that seaborn's error bars are plotted offset from the true end of the colored bar in barplots (see images). This is caused by the default style parameter `'patch.edgecolor': 'w'`, which adds an invisible border that the error bar is truly centered on (see offset.png). I like the white border, but it would be great if the error bars were centered on the true bar and not the white border, particularly in whitegrid style.
To reproduce:
```
import numpy as np
import seaborn as sns
import random
import matplotlib.pyplot as plt
# Data simulation
rng = np.random.RandomState(0)
variable = rng.normal(20, 1, size = 50)
random.seed(0)
group = random.choices(["G1", "G2", "G3"], k = 50)
df = {'variable': variable, 'group': group}
sns.set_style("whitegrid", {"patch.edgecolor": "none"})
# Correct error bars no whitespace
sns.barplot(x = group, y = variable, errorbar="se", capsize = 0.1, errwidth=1.5)
fname = "test_errorbar_correct.pdf"
plt.savefig(fname, bbox_inches="tight")
# Offset error bars no whitespace
sns.set_style("whitegrid")
sns.barplot(x = group, y = variable, errorbar="se", capsize = 0.1, errwidth=1.5)
fname = "test_errorbar_offset.pdf"
plt.savefig(fname, bbox_inches="tight")
```
Correct

Offset

|
closed
|
2023-06-06T19:20:09Z
|
2023-06-07T01:12:12Z
|
https://github.com/mwaskom/seaborn/issues/3380
|
[] |
schmittlema
| 3
|
pytorch/vision
|
machine-learning
| 8,358
|
Choose either 'long' or 'short' options for the resize anchor edge if the size variable is scalar
|
### 🚀 The feature
Choose either 'long' or 'short' options for the resize anchor edge if the size variable is scalar
### Motivation, pitch
`torchvision.transforms.Resize()` does not provide a clean interface for resizing images based off the longer edge.
Consider the following use case - a user wants to resize a set of images such that the dimensions are constrained by `size`, e.g. the longer edge of the images is always equal to `size`. Consider two images of size `[1000, 500]` and `[500, 1000]`. We want to resize both such that the maximum dimension is 500, e.g. resize the first image to `[500, 250]`and the second to `[250, 500]`.
The naive method approach would be to set `size = 500`. As noted in the docs,
> If size is an int, smaller edge of the image will be matched to this number.
But in both our cases, the smaller edge of the image is already 500 so this essentially does nothing.
Setting `max_size = 500` also doesn't solve the issue since the current implementation specifically doesn't allow `max_size == size` in the code. While we could select a value for `size` that is less than `max_size`, there's no clear way to pick a value of `size` that would result in the desired effect.
Right now there's no clean way to resize images based solely off the size of the longer edge. Adding the ability to pick the resize anchor edge would allow this.
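For illustration, a minimal sketch reproducing the two constraints described above (tensor shapes are hypothetical; behavior as of recent torchvision releases):
```python
import torch
from torchvision import transforms

img = torch.rand(3, 500, 1000)  # shorter edge already 500

# size=500 matches the *shorter* edge, which is already 500 -> effectively a no-op
print(transforms.Resize(size=500)(img).shape)  # torch.Size([3, 500, 1000])

# max_size must be strictly greater than size, so this raises ValueError
try:
    transforms.Resize(size=500, max_size=500)(img)
except ValueError as exc:
    print(exc)
```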
### Alternatives
_No response_
### Additional context
A similar comment was made in [#2868](https://github.com/pytorch/vision/issues/2868), but it seems like the discussion about the longer edge was lost in the final implementation
|
closed
|
2024-03-28T00:00:06Z
|
2024-06-05T11:49:29Z
|
https://github.com/pytorch/vision/issues/8358
|
[] |
sidijju
| 8
|
graphql-python/flask-graphql
|
graphql
| 56
|
Incorrect request populated as context for mutation
|
I posted about this on stackoverflow, but figured I'd ask about it directly here as well: https://stackoverflow.com/questions/53233291/python-flask-and-graphene-incorrect-request-causes-security-issue
Basically, the issue is that when I try to perform a high volume of mutations as one user while another user is making requests as well, some number of those mutations are made as the wrong user.
The issue seems to go away when I run with `gunicorn` instead of `FLASK_ENV=production flask run`
I know the context is populated here: https://github.com/graphql-python/flask-graphql/blob/master/flask_graphql/graphqlview.py but the context for the query is incorrect in this case.
I was wondering if anyone had seen this issue before, or could point me in the right direction so I can figure out what's wrong.
|
closed
|
2018-11-09T21:24:21Z
|
2019-12-30T22:28:49Z
|
https://github.com/graphql-python/flask-graphql/issues/56
|
[] |
maxlang
| 0
|
jazzband/django-oauth-toolkit
|
django
| 617
|
Python 3.4 or Python 3.5
|
The [README for 1.2.0](https://github.com/jazzband/django-oauth-toolkit/blob/86e8f9b22c9d5957b8ff6097208de69758a3c013/README.rst) states that the requirement is Python 3.5+.
However, the [Changelog](https://github.com/jazzband/django-oauth-toolkit/blob/master/CHANGELOG.md) states that the minimum is Python 3.4+.
|
closed
|
2018-07-06T20:02:01Z
|
2018-07-21T15:30:05Z
|
https://github.com/jazzband/django-oauth-toolkit/issues/617
|
[] |
robrap
| 3
|
healthchecks/healthchecks
|
django
| 187
|
hc/settings.py template -> resolve from env variables
|
Is it possible to do something like this in hc/settings.py?
```
import os
...
if os.environ.get("DB_TYPE") == "mysql" or os.environ.get("DB_TYPE") == "postgres":
    DATABASES = {
        'default': {
            # note the trailing dot, so ENGINE becomes e.g. 'django.db.backends.mysql'
            'ENGINE': 'django.db.backends.' + os.environ['DB_TYPE'],
            'HOST': os.environ['DB_HOST'],
            'PORT': os.environ['DB_PORT'],
            'NAME': os.environ['DB_NAME'],
            'USER': os.environ['DB_USER'],
            'PASSWORD': os.environ['DB_PASSWORD'],
            'TEST': {'CHARSET': 'UTF8'}
        }
    }
SLACK_CLIENT_ID = repr(os.environ.get('SLACK_CLIENT_ID', 'default'))
REGISTRATION_OPEN = os.environ.get("REGISTRATION_OPEN", 'True') == "True"
PUSHOVER_EMERGENCY_EXPIRATION = os.environ.get('PUSHOVER_EMERGENCY_EXPIRATION', '13370')
...
```
It would be great if this could be resolved from env variables.
It will not break the default behaviour if you set default values.
Thanks
|
closed
|
2018-09-19T12:41:08Z
|
2018-10-22T14:27:17Z
|
https://github.com/healthchecks/healthchecks/issues/187
|
[] |
lukasmrtvy
| 1
|
JoeanAmier/TikTokDownloader
|
api
| 174
|
When downloading from TikTok: "Response content is empty; the API may no longer work or the Cookie may have expired, please try updating the Cookie"
|
Thanks to the author for open-sourcing this tool. I ran into the following problem while using it:
After entering 1. (copy, paste, and write Cookie),
it shows:
Current Cookie is logged in
Configuration saved successfully!
Cookie written successfully!
But entering 4. (terminal interactive mode) shows:
Cache data file does not exist
Then entering 3. (batch download linked works (generic)):
Please enter the work link: https://www.tiktok.com/@vueltaalmundoenmoto/video/7332886806339357984
The problem appears:
Response content is empty; the API may no longer work or the Cookie may have expired, please try updating the Cookie
Failed to fetch work data
Attempting retry 1
Response content is empty; the API may no longer work or the Cookie may have expired, please try updating the Cookie
Failed to fetch work data
Attempting retry 2
|
open
|
2024-03-14T08:42:14Z
|
2024-03-17T13:26:51Z
|
https://github.com/JoeanAmier/TikTokDownloader/issues/174
|
[
"功能异常(bug)"
] |
chlinfeng1997
| 5
|
simple-login/app
|
flask
| 2,315
|
SOLVED: On subdomains there is no creation of unique part before the @
|
## Bug report
**Describe the bug**
I decided to create an email domain. I've set it up to have the main domain pointing to protonmail.
emails would be @domain.xyz
Then I decided to add aliases to my setup. So I created a subdomain. "aliases.domain.xyz"
When I am creating an alias in the simplelogin extension or protonpass I am typically offered a new alias like
website . "5 letters or numbers" @ domain.xyz
this is not the case when using a subdomain.
With the subdomain I am only offered
website@subdomain.domain.xyz
**Expected behavior**
I would expect the extensions to also create the random gibberish part in front of the @-sign. This is clearly not the case.
**Environment**
this happens in the protonpass app as well as the simplelogin Chrome extension.
I assume the code hasn't been tested on using subdomains for that purpose.
My intention is to have the mailbox being hosted by protonmail with the main domain and add simplelogins aliases with a subdomain.
SOLUTION:
My bad, there is a checkbox in the custom domain section that I overlooked.
It is working fine.
|
closed
|
2024-11-11T07:57:07Z
|
2024-11-11T08:02:20Z
|
https://github.com/simple-login/app/issues/2315
|
[] |
pyjoku
| 0
|
ageitgey/face_recognition
|
machine-learning
| 1,270
|
Using an already loaded image with the recognition functions?
|
* face_recognition version: 1.3.0
* Python version: 3.8
* Operating System: Windows 10
### Description
I am trying to crop out half the image before running face recognition (the face is always in the top half), but I am unable to find a way to do this without saving the cropped image to disk, as there seems to be no way to pass an image to face_recognition without using face_recognition.load_image_file.
### What I Did
```
I am a moron, wasn't calling the right variable, ignore me XD
```
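For anyone landing here with the same question: the recognition functions accept plain numpy arrays, so the crop never has to touch disk. A minimal sketch (the file name is a placeholder):
```python
import face_recognition

# load_image_file just returns a numpy uint8 array of shape (H, W, 3)
image = face_recognition.load_image_file("photo.jpg")  # placeholder path

# crop the top half with ordinary numpy slicing -- no need to re-save the file
top_half = image[: image.shape[0] // 2]

# the recognition functions accept any RGB numpy array, not only loaded files
face_locations = face_recognition.face_locations(top_half)
face_encodings = face_recognition.face_encodings(top_half, face_locations)
```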
|
closed
|
2021-01-25T13:18:36Z
|
2021-01-30T17:20:20Z
|
https://github.com/ageitgey/face_recognition/issues/1270
|
[] |
TristanBeer
| 2
|
thtrieu/darkflow
|
tensorflow
| 456
|
Change labels?
|
I want to change the labels of the detected objects. For example, change the "person" label to "dog", but I can't seem to get it working.
Any help?
|
closed
|
2017-12-04T10:55:42Z
|
2017-12-04T12:08:04Z
|
https://github.com/thtrieu/darkflow/issues/456
|
[] |
dankm8
| 2
|
mljar/mljar-supervised
|
scikit-learn
| 34
|
provide labels for true classes
|
When working with imbalanced datasets, a class may be underrepresented to the point where y_true and y_pred nearly always contain a different number of classes (for example, one class is missing from the predicted values). Because of this, mljar oftentimes cannot be used for imbalanced datasets.
I have attached the error below:
```
MLJAR AutoML: 0%| | 0/80 [00:00<?, ?model/s]Traceback (most recent call last):
...
File "/home/shoe/.virtualenvs/2ravens/lib/python3.6/site-packages/supervised/automl.py", line 256, in fit
self.not_so_random_step(X, y)
File "/home/shoe/.virtualenvs/2ravens/lib/python3.6/site-packages/supervised/automl.py", line 207, in not_so_random_step
m = self.train_model(params, X, y)
File "/home/shoe/.virtualenvs/2ravens/lib/python3.6/site-packages/supervised/automl.py", line 164, in train_model
il.train({"train": {"X": X, "y": y}})
File "/home/shoe/.virtualenvs/2ravens/lib/python3.6/site-packages/supervised/iterative_learner_framework.py", line 75, in train
self.predictions(learner, train_data, validation_data),
File "/home/shoe/.virtualenvs/2ravens/lib/python3.6/site-packages/supervised/callbacks/callback_list.py", line 23, in on_iteration_end
cb.on_iteration_end(logs, predictions)
File "/home/shoe/.virtualenvs/2ravens/lib/python3.6/site-packages/supervised/callbacks/early_stopping.py", line 59, in on_iteration_end
predictions.get("y_train_true"), predictions.get("y_train_predicted")
File "/home/shoe/.virtualenvs/2ravens/lib/python3.6/site-packages/supervised/metric.py", line 58, in __call__
return self.metric(y_true, y_predicted)
File "/home/shoe/.virtualenvs/2ravens/lib/python3.6/site-packages/supervised/metric.py", line 24, in logloss
ll = log_loss(y_true, y_predicted)
File "/home/shoe/.virtualenvs/2ravens/lib/python3.6/site-packages/sklearn/metrics/classification.py", line 1809, in log_loss
lb.classes_))
ValueError: y_true and y_pred contain different number of classes 3, 2. Please provide the true labels explicitly through the labels argument. Classes found in y_true: [0 1 2]
```
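For context, a standalone sketch of what the error message is asking for — sklearn's `log_loss` accepts an explicit `labels` argument, which is presumably what mljar would need to pass internally:
```python
import numpy as np
from sklearn.metrics import log_loss

# a validation fold that happens to contain only classes 0 and 1
y_true = np.array([0, 1, 0, 1])
# predicted probabilities over all 3 training classes
y_predicted = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
])

# without labels=, log_loss infers 2 classes from y_true and raises the
# "contain different number of classes" error; passing labels fixes it
ll = log_loss(y_true, y_predicted, labels=[0, 1, 2])
```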
|
closed
|
2019-11-07T21:06:44Z
|
2019-11-07T21:17:50Z
|
https://github.com/mljar/mljar-supervised/issues/34
|
[] |
Shoeboxam
| 1
|
facebookresearch/fairseq
|
pytorch
| 4,704
|
How can I use fairseq on Apple M1 chip with GPU?
|
## ❓ Questions and Help
I want to use fairseq on an Apple M1 chip for a BART model. I checked the documentation and the optional arguments, but I could not figure out a solution or a setting for MPS (see also the sketch after the environment details below). So I need your help. Please give me some advice, thank you.
#### What's your environment?
- fairseq Version 0.9.0:
- PyTorch Version 1.12.1
- OS MacOS Monterey Version 12.4:
- How you installed fairseq (`pip`):
- Build command you used (if compiling from source):
- Python version: 3.7.13
- GPU models and configuration: Apple M1 Chip
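As far as I can tell fairseq has no dedicated MPS flag, so this is only a PyTorch-level sketch of how the M1 GPU ("mps" device, available since PyTorch 1.12) is detected and used; whether fairseq's internals then stay on that device is the open question:
```python
import torch

# PyTorch 1.12+ exposes Apple's Metal backend as the "mps" device
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# assumption: fairseq may still move tensors with .cuda() internally,
# so models and inputs would have to be placed on the device explicitly
model = torch.nn.Linear(4, 4).to(device)
x = torch.randn(2, 4, device=device)
y = model(x)
```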
|
open
|
2022-09-08T08:38:21Z
|
2024-10-09T17:01:35Z
|
https://github.com/facebookresearch/fairseq/issues/4704
|
[
"question",
"needs triage"
] |
sataketatsuya
| 5
|
huggingface/datasets
|
pytorch
| 7,037
|
A bug in the Dataset.to_json() function
|
### Describe the bug
When using the Dataset.to_json() function, an unexpected error occurs if lines=False is passed. The stored data should be a single JSON list, but it actually becomes multiple lists, which causes an error when reading the data back.
The reason is that to_json() writes the file in several segments based on the batch size. This is not a problem when lines=True, but it is incorrect when lines=False, because each write appends its own JSON list, producing multiple lists whenever len(dataset) > batch_size.
### Steps to reproduce the bug
try this code:
```python
from datasets import load_dataset
import json
train_dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")["train"]
output_path = "./harmless-base_hftojs.json"
print(len(train_dataset))
train_dataset.to_json(output_path, lines=False, force_ascii=False, indent=2)
with open(output_path, encoding="utf-8") as f:
data = json.loads(f.read())
```
It raises: json.decoder.JSONDecodeError: Extra data: line 4003 column 1 (char 1373709)
Extra square brackets have appeared here:
<img width="265" alt="image" src="https://github.com/huggingface/datasets/assets/26499566/81492332-386d-42e8-88d1-b6d4ae3682cc">
### Expected behavior
The code runs normally.
### Environment info
datasets=2.20.0
|
open
|
2024-07-10T09:11:22Z
|
2024-09-22T13:16:07Z
|
https://github.com/huggingface/datasets/issues/7037
|
[
"bug"
] |
LinglingGreat
| 2
|
vi3k6i5/flashtext
|
nlp
| 48
|
[bug] set of word boundary characters too restrictive
|
Hello there,
first of all: thanks for the amazing algorithm, it's really useful!
It turns out you use only a very restrictive set of characters as `non_word_boundaries`. For many languages this poses a problem. E.g. in German:
```python
from flashtext import KeywordProcessor
kwp = KeywordProcessor()
kwp.add_keyword("lt.")
kwp.extract_keywords("Damit galt es als so gut wie fix, dass Vueling den Zuschlag erhält.")
# i would expect this to be empty
```
The problem can be fixed (for German) by adjusting the property `non_word_boundaries`:
```python
kwp.non_word_boundaries = kwp.non_word_boundaries.union(list("ÖÄÜöäüß"))
```
Would you consider internationalizing the word boundaries or is this restrictive behavior on purpose?
Thanks,
Alex
|
open
|
2018-03-19T15:41:05Z
|
2018-03-19T16:04:13Z
|
https://github.com/vi3k6i5/flashtext/issues/48
|
[] |
aseifert
| 1
|
allenai/allennlp
|
nlp
| 5,441
|
A problem running allennlp after installation
|
When I run allennlp from the command line, I get the following error:
Traceback (most recent call last):
File "/dockerdata/username/anaconda3/bin/allennlp", line 8, in <module>
sys.exit(run())
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/allennlp/__main__.py", line 40, in run
from allennlp.commands import main # noqa
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/allennlp/commands/__init__.py", line 23, in <module>
from allennlp.commands.checklist import CheckList
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/allennlp/commands/checklist.py", line 18, in <module>
from allennlp.confidence_checks.task_checklists.task_suite import TaskSuite
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/allennlp/confidence_checks/task_checklists/__init__.py", line 1, in <module>
from allennlp.confidence_checks.task_checklists.task_suite import TaskSuite
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/allennlp/confidence_checks/task_checklists/task_suite.py", line 9, in <module>
from checklist.perturb import Perturb
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/checklist/perturb.py", line 7, in <module>
from pattern.en import tenses
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/pattern/text/en/__init__.py", line 61, in <module>
from pattern.text.en.inflect import (
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/pattern/text/en/__init__.py", line 80, in <module>
from pattern.text.en import wordnet
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/pattern/text/en/wordnet/__init__.py", line 57, in <module>
nltk.data.find("corpora/" + token)
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/nltk/data.py", line 557, in find
return find(modified_name, paths)
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/nltk/data.py", line 544, in find
return ZipFilePathPointer(p, zipentry)
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/nltk/compat.py", line 41, in _decorator
return init_func(*args, **kwargs)
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/nltk/data.py", line 396, in __init__
zipfile = OpenOnDemandZipFile(os.path.abspath(zipfile))
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/nltk/compat.py", line 41, in _decorator
return init_func(*args, **kwargs)
File "/dockerdata/username/anaconda3/lib/python3.8/site-packages/nltk/data.py", line 936, in __init__
zipfile.ZipFile.__init__(self, filename)
File "/dockerdata/username/anaconda3/lib/python3.8/zipfile.py", line 1268, in __init__
self._RealGetContents()
File "/dockerdata/username/anaconda3/lib/python3.8/zipfile.py", line 1335, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
**How can I solve it? Any help is appreciated!**
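For anyone else hitting this: `BadZipFile` from `nltk.data` usually means a corpus zip under `~/nltk_data/corpora/` was only partially downloaded. A hedged sketch of the usual repair (assuming the corpus involved is wordnet, which is what `pattern`'s loader looks up):
```python
import nltk

# delete the truncated zip under ~/nltk_data/corpora/ first, then force
# a fresh download; force=True re-fetches even if nltk thinks it is installed
nltk.download("wordnet", force=True)
nltk.download("omw-1.4", force=True)  # assumption: also needed on newer nltk
```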
|
closed
|
2021-10-21T12:36:35Z
|
2021-10-21T13:16:03Z
|
https://github.com/allenai/allennlp/issues/5441
|
[
"question"
] |
lwgkzl
| 1
|
jumpserver/jumpserver
|
django
| 14,875
|
[Question] JumpServer Kubernetes plugin can't handle container selection unlike Argo CD
|
### Product Version
4.6.0
### Product Edition
- [x] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [x] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
AWS EKS
### 🤔 Question Description
## Issue Description
In JumpServer's Kubernetes Management interface, when trying to access a pod with multiple containers, the container names are being passed with a comma (e.g., "dove,istio-proxy"), causing a shell access error.
## Current Behavior vs Expected Behavior
- Current (JumpServer):
- Container selection is passed as: "dove,istio-proxy"
- Error: "error: not found any shell"
- Unable to access either container
- Expected (like Argo CD):
- Can access containers individually
- Successful terminal access to 'dove' container
- Successful terminal access to 'istio-proxy' container
## Request
Could we modify the plugin to handle container selection similar to Argo CD, where each container can be accessed separately rather than passing comma-separated container names?
### Expected Behavior
_No response_
### Additional Information
_No response_
|
closed
|
2025-02-12T09:38:20Z
|
2025-02-20T02:47:12Z
|
https://github.com/jumpserver/jumpserver/issues/14875
|
[
"⏳ Pending feedback",
"🤔 Question",
"📦 z~release:v4.7.0"
] |
isjuye
| 4
|
feature-engine/feature_engine
|
scikit-learn
| 473
|
mypy throws error on latest main
|
**Describe the bug**
I'm following the contribution guidelines, but when I run mypy on the latest main branch, I get type errors. The type errors are not related to the issues I'm working on.
**To Reproduce**
Steps to reproduce the behavior:
1. Pull the latest main branch.
2. Cd into feature-engine
3. Run mypy feature_engine
4. Error must be visible in the command prompt.
**Expected behavior**
Per the guidelines, mypy should not report any type errors.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Windows 10 Home
- Browser: Google Chrome
- Version: Chrome (102.0.5005.63), Windows 10 (19044.1706)
**Additional context**
None
|
closed
|
2022-06-12T10:38:34Z
|
2022-06-13T15:22:32Z
|
https://github.com/feature-engine/feature_engine/issues/473
|
[] |
SangamSwadiK
| 11
|
albumentations-team/albumentations
|
deep-learning
| 2,405
|
[New feature] Add apply_to_images to GaussianBlur
|
open
|
2025-03-11T01:04:42Z
|
2025-03-11T01:04:52Z
|
https://github.com/albumentations-team/albumentations/issues/2405
|
[
"enhancement",
"good first issue"
] |
ternaus
| 0
|
|
MaartenGr/BERTopic
|
nlp
| 1,671
|
BERTopic n-gram words are not adjacent to each other
|
After setting ngram_range=(2, 2), the trained BERTopic model generates topics with 2-gram phrases such as Topic_1: {"Modem Router", "Network Setup", etc.}, but the individual words of each 2-gram are not adjacent to each other within the documents; they are far away from each other. It seems that the BERTopic model is not considering 2-grams at all. Is there any way to make sure that the individual words in the 2-gram phrases of each topic actually occur next to each other within the related documents? I don't want BERTopic to consider "Modem Router" a 2-gram if no sentence in the whole corpus has the words "Modem" and "Router" next to each other.
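For what it's worth, the topic words come from the `CountVectorizer` that BERTopic's c-TF-IDF step uses, and `CountVectorizer` only forms n-grams from tokens that are adjacent within a document. A sketch of the setup I would expect to respect adjacency:
```python
from sklearn.feature_extraction.text import CountVectorizer
from bertopic import BERTopic

# n-grams are built by the vectorizer from adjacent tokens in each document
vectorizer_model = CountVectorizer(ngram_range=(2, 2), stop_words="english")
topic_model = BERTopic(vectorizer_model=vectorizer_model)
topics, probs = topic_model.fit_transform(docs)  # docs: your list of documents
```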
|
open
|
2023-12-07T06:59:43Z
|
2023-12-13T15:24:14Z
|
https://github.com/MaartenGr/BERTopic/issues/1671
|
[] |
navidNickaan
| 5
|
google-research/bert
|
tensorflow
| 1,030
|
BERT Large , 512 sequence length - Allocation of X exceeds Y% of system memory.
|
Hi ,
I am trying to run BERT Large model having 512 sequence length on CPU for inference. I have converted checkpoint file from BERT Large to savedModel format which has feature transformation ported to it as well.
However when I do the inference I can see warning message as
**"tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of X exceeds 10% of system memory."**
Could anyone please help me identify the root cause? Having read different issues related to this, the common suggestion is to reduce the batch size; however, trying that doesn't help.
|
closed
|
2020-03-12T21:16:16Z
|
2020-08-14T19:54:25Z
|
https://github.com/google-research/bert/issues/1030
|
[] |
17patelumang
| 7
|
httpie/cli
|
api
| 691
|
Running HTTPie from PyCharm
|
I would like to run the project in pycharm, but it doesn't work. When I run `__main__.py` which is in `httpie/` folder, it just raises the error: `ModuleNotFoundError: No module named '__main__.core'; '__main__' is not a package`.
So what should I do if I would like to run the code myself?
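In case it helps: running the file directly breaks the relative imports in `__main__.py`; the package has to be executed as a module. A sketch of a wrapper script (`run_httpie.py` is a hypothetical name) that a PyCharm run configuration can point at, equivalent to `python -m httpie`:
```python
# run_httpie.py -- hypothetical helper to use as the PyCharm run target
import runpy
import sys

# simulate: http GET https://example.org
sys.argv = ["http", "GET", "https://example.org"]

# executes the package's __main__.py with the package context intact,
# so the relative imports ('.core' etc.) resolve correctly
runpy.run_module("httpie", run_name="__main__")
```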
|
closed
|
2018-07-22T02:18:18Z
|
2018-07-23T16:23:52Z
|
https://github.com/httpie/cli/issues/691
|
[] |
memdreams
| 2
|
kornia/kornia
|
computer-vision
| 2,574
|
Error loading image
|
### Describe the bug
I use code to load image:
```
import kornia as K
from kornia.core import Tensor

img: Tensor = K.io.load_image(filepath, K.io.ImageLoadType.RGB32)
img = img[None]
x_gray = K.color.rgb_to_grayscale(img)
```
I get error:
```
File "edge_detector.py", line 7, in edge_detection
img: Tensor = K.io.load_image(filepath, K.io.ImageLoadType.RGB32)
File "/home/tupk/anaconda3/envs/dl/lib/python3.8/site-packages/kornia/io/io.py", line 76, in load_image
image: Tensor = load_image_to_tensor(path_file, device) # CxHxW
File "/home/tupk/anaconda3/envs/dl/lib/python3.8/site-packages/kornia/io/io.py", line 41, in load_image_to_tensor
th_tensor = dlpack.from_dlpack(cv_tensor) # HxWx3
RuntimeError: from_dlpack received an invalid capsule. Note that DLTensor capsules can be consumed only once, so you might have already constructed a tensor from it once.
```
### Reproduction steps
```python
import kornia as K
from kornia.core import Tensor
import cv2
img: Tensor = K.io.load_image(filepath, K.io.ImageLoadType.RGB32)
img = img[None]
x_gray = K.color.rgb_to_grayscale(img)
```
### Expected behavior
It runs OK.
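As a stopgap while the dlpack path is broken, one possible workaround (a sketch, assuming `kornia.utils.image_to_tensor`) is to decode with OpenCV and hand the array to kornia yourself:
```python
import cv2
import kornia as K

filepath = "image.jpg"  # placeholder

# decode with OpenCV and convert BGR -> RGB
img_bgr = cv2.imread(filepath)
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)

# HxWxC uint8 -> CxHxW float tensor in [0, 1], then add the batch dim
img = K.utils.image_to_tensor(img_rgb).float() / 255.0
img = img[None]
x_gray = K.color.rgb_to_grayscale(img)
```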
### Environment
```shell
kornia: 0.7.0
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.106.00 Driver Version: 460.106.00 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GTX 1060 On | 00000000:01:00.0 On | N/A |
| N/A 58C P8 4W / N/A | 100MiB / 6078MiB | 23% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1656 G /usr/lib/xorg/Xorg 52MiB |
| 0 N/A N/A 5372 G /usr/lib/xorg/Xorg 45MiB |
+-----------------------------------------------------------------------------+
```
### Additional context
_No response_
|
closed
|
2023-09-27T04:09:56Z
|
2023-10-04T01:55:52Z
|
https://github.com/kornia/kornia/issues/2574
|
[
"help wanted"
] |
phamkhactu
| 5
|
gradio-app/gradio
|
data-visualization
| 10,799
|
Gradio whitelist FRPC
|
Was chatting with @cocktailpeanut about Gradio share links and how they are currently blocked in Windows due to Windows Defender.
There are programmatic ways to whitelist programs from Windows Defender, see https://stackoverflow.com/questions/40233123/windows-defender-add-exclusion-folder-programmatically
The idea would be to add a CLI command:
```
gradio whitelist --windows-defender
```
which downloads and whitelists the FRPC client specifically on Windows (this command would need to run with elevated permissions). After you do this, you should be able to run Gradio share links in all of your programs.
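For reference, a rough sketch of what the command could do under the hood, following the Add-MpPreference approach from that Stack Overflow answer (the FRPC path below is hypothetical, and the process must be elevated):
```python
import subprocess
from pathlib import Path

# hypothetical location of the downloaded FRPC binary
frpc_path = Path.home() / ".cache" / "gradio" / "frpc" / "frpc_windows_amd64.exe"

# Add-MpPreference is the documented Defender cmdlet for exclusions;
# this must run from an elevated process
subprocess.run(
    ["powershell", "-Command", f"Add-MpPreference -ExclusionPath '{frpc_path}'"],
    check=True,
)
```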
|
open
|
2025-03-12T22:12:12Z
|
2025-03-13T00:16:56Z
|
https://github.com/gradio-app/gradio/issues/10799
|
[
"enhancement"
] |
abidlabs
| 2
|
tiangolo/uwsgi-nginx-flask-docker
|
flask
| 180
|
Build problem
|
Hello
I tried to build a docker image from uwsgi-nginx-flask-docker, but I get an error (a beginner's mistake?):
My application tree is:
```
app/
app/
__init__.py
main.py
uwsgi.ini
Dockerfile
```
My Dockerfile is:
```
FROM tiangolo/uwsgi-nginx:python3.8-alpine
COPY ./app /app
WORKDIR /app
```
When I run the build, I get the following error:
$ docker image build - < Dockerfile
```
Sending build context to Docker daemon 3.072kB
Step 1/3 : FROM tiangolo/uwsgi-nginx:python3.8-alpine
---> e92f09433957
Step 2/3 : COPY ./app /app
COPY failed: stat /app/list/data/docker/tmp/docker-builder266521360/app: no such file or directory
```
What is wrong?
Many thanks
|
closed
|
2020-05-13T23:53:02Z
|
2020-06-06T13:27:35Z
|
https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/180
|
[] |
mlabarre
| 2
|
PokemonGoF/PokemonGo-Bot
|
automation
| 5,458
|
Implementation of PokeSnipers
|
### Short Description
Use sniping websites like https://pokesnipers.com/, pokesniper.org, or pokesnipe.de to find uncommon/rare Pokémon.
### Possible solution
http://pokeapi.pokesnipe.de/ offers a JSON API that can be implemented. After going to the location and encountering the Pokémon (DON'T CATCH IT YET OR YOU'LL GET BANNED), return to the last stable location (your location before using the sniper), and then catch it. Using this method, one doesn't get banned.
### How it would help others
It would help in getting better "VIP" Pokémon.
|
closed
|
2016-09-15T18:57:35Z
|
2016-09-22T03:25:55Z
|
https://github.com/PokemonGoF/PokemonGo-Bot/issues/5458
|
[] |
bhoot1234567890
| 2
|
ultralytics/yolov5
|
deep-learning
| 13,502
|
Label detection
|
### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I am using YOLO to detect labels and then extract the text within the detected regions. However, I’m facing an issue with background color variations. If the background color of the label changes, the model struggles to detect it. I don’t have enough images with different background colors to train the model.
Would it be a good approach to train the model using grayscale images to generalize for any background color? Or are there alternative techniques or preprocessing steps that could help improve detection robustness in this scenario? Any suggestions or ideas would be greatly appreciated.
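If it helps to experiment before committing to retraining, a small sketch of the grayscale idea: convert to grayscale but keep three channels so the model's input shape is unchanged, and apply the same transform at both training and inference time:
```python
import cv2
import numpy as np

def to_grayscale_3ch(path: str) -> np.ndarray:
    """Grayscale an image but keep 3 channels for the YOLOv5 input."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return np.stack([gray, gray, gray], axis=-1)
```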
Thank you!
### Additional
_No response_
|
open
|
2025-01-30T13:57:20Z
|
2025-02-06T23:01:24Z
|
https://github.com/ultralytics/yolov5/issues/13502
|
[
"question",
"detect"
] |
Uddeshya1052
| 6
|
widgetti/solara
|
fastapi
| 851
|
docs: more info on use_context and use of global reactives
|
would like to see more documentation on:
- how to avoid issues with sharing same object on a global reactive
- using `use_context` in solara (just duplicate the one from the reacton docs)
|
open
|
2024-11-05T23:52:54Z
|
2024-11-05T23:54:34Z
|
https://github.com/widgetti/solara/issues/851
|
[] |
rileythai
| 0
|
deeppavlov/DeepPavlov
|
tensorflow
| 907
|
ner_model(['Bob Ross lived in Florida']) is giving an error
|
raise type(e)(node_def, op, message)
InvalidArgumentError: Requested more than 0 entries, but params is empty. Params shape: [1,7,0]
|
closed
|
2019-06-28T08:34:45Z
|
2020-05-13T09:47:24Z
|
https://github.com/deeppavlov/DeepPavlov/issues/907
|
[] |
puneetkochar016
| 1
|
microsoft/Bringing-Old-Photos-Back-to-Life
|
pytorch
| 220
|
List Index Out of Range
|
I tried two images. Both very old, one with lots of scratches, the other with some. I got `List Index Out of Range` both times.
|
open
|
2022-01-13T02:05:42Z
|
2022-12-04T10:16:40Z
|
https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/220
|
[] |
yeasir2148
| 6
|
CorentinJ/Real-Time-Voice-Cloning
|
pytorch
| 285
|
AssertionError
|
While synthesizer_preprocess_audio.py is running, it is interrupted by an assertion error at this point every time. Can anyone tell me what is happening?
AssertionError
LibriSpeech: 21%|███▊ | 251/1172 [48:17<2:57:10, 11.54s/speakers]
|
closed
|
2020-02-19T06:37:12Z
|
2020-07-04T22:38:57Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/285
|
[] |
AmitBDA
| 1
|
babysor/MockingBird
|
pytorch
| 256
|
demo_toolbox.py: an unhandled win32 exception occurred in python.exe
|
An error occurs when clicking Browse to load an audio file. System: Windows 8.1 64-bit, AMD graphics card, Python 3.7.9, PyQt5 version 5.12.0. Could this be a PyQt5 version problem?



|
open
|
2021-12-08T16:01:14Z
|
2021-12-08T16:01:14Z
|
https://github.com/babysor/MockingBird/issues/256
|
[] |
zhenqicai
| 0
|
jonaswinkler/paperless-ng
|
django
| 1,106
|
[BUG] django.db.utils.OperationalError: no such table: documents_document
|
**Describe the bug**
When trying to import PDFs into a fresh installation of Paperless-ng on Archlinux, Paperless complains about a missing `documents_document` table in the database.
Any idea or help with this issue is greatly appreciated.
**To Reproduce**
Steps to reproduce the behavior:
1. Clean installation of paperless-ng
2. Launch the webserver, consumer and scheduler
3. Drop a PDF onto the web interface or in the consume folder
4. See error
**Expected behavior**
Process the imported PDF.
**Scheduler logs**
```
11:14:56 [Q] INFO Process-1:4 pushing tasks at 21426
11:14:56 [Q] INFO Process-1:1 processing [BRWD812655C0E62_000324.pdf]
11:14:56 [Q] INFO Q Cluster double-timing-missouri-twenty running.
11:14:56 [Q] INFO Process-1:3 monitoring at 21425
11:14:56 [Q] INFO Process-1:1 stopped doing work
11:14:56 [Q] ERROR no such table: django_q_task
11:14:56 [Q] ERROR Failed [BRWD812655C0E62_000324.pdf] - no such table: documents_document : Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 423, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: no such table: documents_document
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/django_q/cluster.py", line 430, in worker
res = f(*task["args"], **task["kwargs"])
File "/usr/share/paperless/src/documents/tasks.py", line 74, in consume_file
document = Consumer().try_consume_file(
File "/usr/share/paperless/src/documents/consumer.py", line 197, in try_consume_file
self.pre_check_duplicate()
File "/usr/share/paperless/src/documents/consumer.py", line 94, in pre_check_duplicate
if Document.objects.filter(Q(checksum=checksum) | Q(archive_checksum=checksum)).exists(): # NOQA: E501
File "/usr/lib/python3.9/site-packages/django/db/models/query.py", line 808, in exists
return self.query.has_results(using=self.db)
File "/usr/lib/python3.9/site-packages/django/db/models/sql/query.py", line 550, in has_results
return compiler.has_results()
File "/usr/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1145, in has_results
return bool(self.execute_sql(SINGLE))
File "/usr/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1175, in execute_sql
cursor.execute(sql, params)
File "/usr/lib/python3.9/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/lib/python3.9/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 423, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: documents_document
```
**Relevant information**
- Host OS of the machine running paperless: Archlinux
- Version 1.4.4
- Installation method: bare metal
- dependencies differing from pipfile: django-q (1.3.6 instead of 1.3.4), numpy (1.20.3 instead of 1.19.5), pillow (8.2.0 instead of 8.1), pikepdf (2.12.1 instead of 2.5), scipy (1.6.3 instead of 1.5.4), watchdog (0.10.6 instead of 1.0), inotifyrecursive (0.3.5 instead of 0.3.4), uvloop (0.15.1 instead of 0.14). Since the error seems to come from Django itself, I guess none of these dependencies are to blame.
|
closed
|
2021-06-08T06:47:41Z
|
2021-06-19T08:15:33Z
|
https://github.com/jonaswinkler/paperless-ng/issues/1106
|
[] |
amo13
| 4
|
deepset-ai/haystack
|
nlp
| 8,930
|
Remove explicit mention of Haystack "2.x" in cookbooks
|
closed
|
2025-02-25T10:56:07Z
|
2025-03-11T09:05:31Z
|
https://github.com/deepset-ai/haystack/issues/8930
|
[
"P2"
] |
julian-risch
| 0
|
|
strawberry-graphql/strawberry
|
fastapi
| 3,433
|
Add support of permission_classes for type decorator
|
## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior
## Description
As the title describes, the `strawberry.type` decorator lacks the option to set global permission classes, making the code repetitive and highly verbose; adding it would also improve DX.
Protecting a whole query with the same permission currently looks like this:
### Actual solution
```
@strawberry.type
class Query:
user: str = strawberry.field(permission_classes=[IsAuthenticated])
x1: str = strawberry.field(permission_classes=[IsAuthenticated])
x2: str = strawberry.field(permission_classes=[IsAuthenticated])
x3: str = strawberry.field(permission_classes=[IsAuthenticated])
x4: str = strawberry.field(permission_classes=[IsAuthenticated])
```
### Desired solution
```
@strawberry.type(permission_classes=[IsAuthenticated])
class Query:
user: str
x1: str
x2: str
x3: str
```
|
open
|
2024-04-02T17:35:51Z
|
2025-03-20T15:56:40Z
|
https://github.com/strawberry-graphql/strawberry/issues/3433
|
[] |
alexandru0-dev
| 0
|
AirtestProject/Airtest
|
automation
| 284
|
Assertion failed
|
Traceback (most recent call last):
File "C:\Users\admin\AppData\Local\Programs\Python\Python37-32\lib\site-packages\airtest\aircv\sift.py", line 253, in _find_homography
M, mask = cv2.findHomography(sch_pts, src_pts, cv2.RANSAC, 5.0)
cv2.error: OpenCV(3.4.2) C:\projects\opencv-python\opencv\modules\calib3d\src\ptsetreg.cpp:169: error: (-215:Assertion failed) count >= 0 && count2 == count in function 'cv::RANSACPointSetRegistrator::run'
**python version:** `python3.7`
**airtest version:** `1.0.25`
|
closed
|
2019-02-26T07:51:51Z
|
2019-02-28T02:17:56Z
|
https://github.com/AirtestProject/Airtest/issues/284
|
[
"wontfix"
] |
JustinToken
| 3
|
zappa/Zappa
|
flask
| 489
|
[Migrated] Refactor Let's Encrypt implementation to use available packages [proposed code]
|
Originally from: https://github.com/Miserlou/Zappa/issues/1300 by [rgov](https://github.com/rgov)
The Let's Encrypt integration works by invoking the `openssl` command line tool, creating various temporary files, and communicating with the Let's Encrypt certificate authority API directly.
The Python package that Let's Encrypt's `certbot` itself uses is called [`acme`](https://github.com/certbot/certbot/tree/master/acme) and it handles the network protocol. Additionally, the [`cryptography`](https://cryptography.io) package offers functions for generating keys, certificate requests, etc. in-process, without invoking a subprocess. Both of these packages are also well-tested and actively developed.
Therefore I would recommend switching to use them in place of the current implementation.
I've made [a gist](https://gist.github.com/rgov/fb97a9585fa18549851d810b1045f0a4) which creates a simple wrapper around the basic functionality I think that's needed:
- `load_private_key` deserializes a PEM private key
- `generate_private_key` generates an asymmetric key pair (2048-bit RSA)
- `generate_csr` creates a certificate signing request for a set of domains
- `get_certificate` communicates with Let's Encrypt to retrieve the certificate
While not necessarily everything you need (perhaps you'd need to serialize out the certificate as a PEM file as well), it should be a good start to improving the implementation.
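To make the proposal concrete, a minimal sketch of the key-and-CSR half using `cryptography` (domain names are placeholders; the gist's actual wrappers may differ):
```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# 2048-bit RSA key pair, generated in-process (no openssl subprocess)
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# CSR covering a set of domains via SubjectAlternativeName
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    .add_extension(
        x509.SubjectAlternativeName(
            [x509.DNSName("example.com"), x509.DNSName("www.example.com")]
        ),
        critical=False,
    )
    .sign(key, hashes.SHA256())  # cryptography >= 3.1: no backend argument needed
)

# serialize both out as PEM
csr_pem = csr.public_bytes(serialization.Encoding.PEM)
key_pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.TraditionalOpenSSL,
    serialization.NoEncryption(),
)
```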
(The example code may not work without applying [this change](https://github.com/certbot/josepy/pull/5) to the `josepy` package that I proposed.)
|
closed
|
2021-02-20T09:43:23Z
|
2024-04-13T16:36:18Z
|
https://github.com/zappa/Zappa/issues/489
|
[
"no-activity",
"auto-closed"
] |
jneves
| 2
|
google-research/bert
|
nlp
| 1,373
|
how much should be the accuracy of bert base cased on squad 2
|
Hello
I fine-tuned BERT base cased on SQuAD 2.0 with the following command:
python run_squad.py \
--vocab_file=$BERT_BASE_DIR/vocab.txt \
--bert_config_file=config.json \
--init_checkpoint=bert_model.ckpt \
--do_train=True \
--train_file=$SQUAD_DIR/train-v2.0.json \
--do_predict=True \
--predict_file=$SQUAD_DIR/dev-v2.0.json \
--train_batch_size=24 \
--learning_rate=3e-5 \
--num_train_epochs=2.0 \
--max_seq_length=384 \
--doc_stride=128 \
--output_dir=~/squad_large/ \
--version_2_with_negative=True \
--null_score_diff_threshold=-2
and get the following output from the evaluate script:
{"exact": 62.01465509980628, "f1": 64.47961013334715, "total": 11873, "HasAns_exact": 47.891363022941974, "HasAns_f1": 52.828341955672826, "HasAns_total": 5928, "NoAns_exact": 76.09756097560975, "NoAns_f1": 76.09756097560975, "NoAns_total": 5945, "best_exact": 62.0651899267245, "best_exact_thresh": -2.0197997093200684, "best_f1": 64.51742974575069, "best_f1_thresh": -2.0197997093200684, "pr_exact_ap": 31.64157011302471, "pr_f1_ap": 37.53953936447737, "pr_oracle_ap": 73.56376007315332}
I assume the exact match should be higher (something around 73). Is there something I can check this against?
|
open
|
2022-11-20T12:42:14Z
|
2022-11-20T12:42:14Z
|
https://github.com/google-research/bert/issues/1373
|
[] |
navid72m
| 0
|
AUTOMATIC1111/stable-diffusion-webui
|
pytorch
| 15,495
|
[Feature Request]: Button for 'Respect Preferred VAE" in XYZ plot
|
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
When using XYZ plot, it seems to use whatever VAE is selected in the UI, not the preferred VAE per checkpoint (as specified in JSON files etc).
Example case: testing some checkpoints across 1.5 and SDXL. The XL model has no VAE (or I get error results), and the 1.5 checkpoints have a couple of different VAEs. While the 1.5 checkpoints can generally share the same VAE just fine, I definitely need to switch between 1.5 and XL, and the option to respect the preferred VAE even for 1.5 checkpoints would be nice.
### Proposed workflow
1. Go to XYZ plot
2. Check the "respect preferred VAE" box. If no VAE is specified as preferred in the JSON, use the currently selected one (i.e., same as now).
3. Compare 1.5 and XL checkpoints without image generation glitches or errors! Success. Or, compare models using their specified VAEs, i.e., compare the models' "best outputs".
### Additional information
Thanks
|
open
|
2024-04-12T04:48:15Z
|
2024-04-12T04:48:15Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15495
|
[
"enhancement"
] |
ewebgh33
| 0
|
sanic-org/sanic
|
asyncio
| 2,751
|
queue.put fails with shared_ctx
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
Run the app:
python -m sanic aaa.app --workers=4
1. Add a multiprocessing.Queue to shared_ctx
2. Make a request, which calls queue.put(1)
Then I get this error:
http: LogLevel.ERROR: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) while doing a GET request to URL: http://127.0.0.1:8000/
After pressing Ctrl+C, one worker appears stuck — only three of the four are stopped:
```
[14399] [INFO] Starting worker [14399]
[14400] [INFO] Starting worker [14400]
[14402] [INFO] Starting worker [14402]
[14401] [INFO] Starting worker [14401]
[14390] [INFO] Received signal SIGINT. Shutting down.
[14390] [INFO] Server Stopped
[14400] [INFO] Stopping worker [14400]
[14399] [INFO] Stopping worker [14399]
[14402] [INFO] Stopping worker [14402]
```
### Code snippet
```
from multiprocessing import Queue
from sanic import Sanic
from sanic.response import text
app = Sanic(__name__)
@app.get('/')
async def index(request):
request.app.shared_ctx.queue.put(1)
return text('Hello')
@app.main_process_start
async def main_start(app):
app.shared_ctx.queue = Queue()
if __name__ == '__main__':
app.run(host='localhost', port=8000, workers=4)
```
### How do you run Sanic?
Sanic CLI
### Operating System
Archlinux
### Sanic Version
23.3.0
|
closed
|
2023-05-11T09:22:33Z
|
2023-05-12T03:26:15Z
|
https://github.com/sanic-org/sanic/issues/2751
|
[
"bug"
] |
muyu525
| 1
|
graphistry/pygraphistry
|
pandas
| 8
|
Fix dependencies of pip package
|
We should probably depend on Pandas since nobody is going to use the direct json API.
|
closed
|
2015-06-25T21:31:49Z
|
2015-08-06T13:53:52Z
|
https://github.com/graphistry/pygraphistry/issues/8
|
[
"bug"
] |
thibaudh
| 1
|
httpie/cli
|
api
| 1,323
|
How to print part of the body
|
I POST to a URL and get a body like this.
I just want to print the value of "token".
How can I do that?
<img width="709" alt="제목 없는 그림" src="https://user-images.githubusercontent.com/43262277/158118069-187f0c8b-ae85-4915-97f7-fae9ed096222.png">
|
closed
|
2022-03-14T06:40:05Z
|
2022-03-15T06:52:50Z
|
https://github.com/httpie/cli/issues/1323
|
[
"question"
] |
zhaohanqing95
| 2
|
wkentaro/labelme
|
deep-learning
| 374
|
JSON to png
|
I know there is file corresponding script to do that https://github.com/wkentaro/labelme/tree/master/examples/semantic_segmentation
But how do I get a PNG if I have an opened JSON file?
I wrote something like this:
```
import json
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw

with open(json_names[0], encoding='utf-8') as f:
    data = json.load(f)

def polygons_to_mask_array(polygons, width: int = 300, height: int = 300) -> np.ndarray:
    '''
    This function takes a list of lists that contains polygon masks for each building. Example;
    [[x11,y11,x12,y12,...],...,[xn1,yn1,xn2,yn2,...]]
    The return of this function is an array of size width x height which contains a binary mask
    as defined by the list of polygons. This will be the target for our network!
    '''
    img = Image.new('L', (width, height), 0)
    for polygon in polygons:
        nested_lst_of_tuples = [tuple(l) for l in polygon['points']]
        ImageDraw.Draw(img).polygon(nested_lst_of_tuples, outline=1, fill=1)
    mask = np.array(img)
    return mask

plt.imshow(polygons_to_mask_array(data['shapes'], 898, 559))
```

Is that okay?
|
closed
|
2019-04-16T13:53:29Z
|
2019-04-16T16:12:50Z
|
https://github.com/wkentaro/labelme/issues/374
|
[] |
Diyago
| 1
|
marimo-team/marimo
|
data-science
| 3,780
|
datashader rasterize not supported?
|
### Describe the bug
Would marimo be able to support Datashader's rasterize()?
In Jupyter, plots with millions of points can be rasterized before being sent to the browser.
E.g., when https://holoviews.org/user_guide/Large_Data.html is converted to marimo, the rasterized plots are not re-rendered when zoomed in.
### Environment
<details>
```
```
</details>
### Code to reproduce
https://holoviews.org/user_guide/Large_Data.html
https://raw.githubusercontent.com/holoviz/holoviews/main/examples/user_guide/15-Large_Data.ipynb
|
open
|
2025-02-13T07:04:19Z
|
2025-02-24T08:51:02Z
|
https://github.com/marimo-team/marimo/issues/3780
|
[
"bug"
] |
Jzege
| 6
|
521xueweihan/HelloGitHub
|
python
| 2,786
|
[Open source self-recommendation] A full-featured code diff-view component for React/Vue, modeled on GitHub
|
## Recommended project
- Project URL: https://github.com/MrWangJustToDo/git-diff-view
- Category: JS
- Title: An out-of-the-box diff-view component with a GitHub-like design
- Description: A front-end component that implements a GitHub-style code diff view at zero cost, suitable for building your own code-review platform and similar needs
- Highlights:
1. Ready to use on both React and Vue
2. Separates data processing from view logic, enabling multiple performance-optimization strategies (server-side / web worker)
3. Complete and flexible syntax highlighting, with [highlight.js](https://highlightjs.org/) built in and optional support for [shiki](https://github.com/shikijs/shiki) or any other engine based on the [hast](https://github.com/syntax-tree/hast) syntax tree
4. Supports custom widgets, such as comment boxes, on code lines inside the diff view
5. Supports diffing plain code text as well as rendering `git diff` output
6. More demos on the project homepage
- Screenshots:
<img width="860" alt="image" src="https://github.com/user-attachments/assets/2a2680f6-5b42-4901-a1cf-df49fa0d7aa9">
<img width="860" alt="image" src="https://github.com/user-attachments/assets/a73600e9-5773-4f72-9717-2e0fcbd093eb">
<img width="1396" alt="image" src="https://github.com/user-attachments/assets/46b56221-553a-4cd8-9c0f-d7017d26c83f">
- Roadmap:
1. Support Vue 2 and more framework platforms
2. Built-in light/dark themes
3. Richer style customization
4. More component customization options
5. more...
|
open
|
2024-07-31T08:17:38Z
|
2024-08-16T02:17:19Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2786
|
[] |
MrWangJustToDo
| 1
|
openapi-generators/openapi-python-client
|
fastapi
| 763
|
openapi-python-client fails to load with typing-extensions==4.6.0
|
**Describe the bug**
I bumped up (the pydantic dependency) `typing-extensions` from 4.5.0 to 4.6.0 in my project, and openapi-python-client fails to load.
Here's the stack trace:
```
File "/opt/hostedtoolcache/Python/3.9.16/x64/bin/openapi-python-client", line 5, in <module>
from openapi_python_client.cli import app
File "/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/openapi_python_client/__init__.py", line 21, in <module>
from .parser import GeneratorData, import_string_from_class
File "/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/openapi_python_client/parser/__init__.py", line 5, in <module>
from .openapi import GeneratorData, import_string_from_class
File "/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/openapi_python_client/parser/openapi.py", line 11, in <module>
from .. import schema as oai
File "/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/openapi_python_client/schema/__init__.py", line 19, in <module>
from .openapi_schema_pydantic import (
File "/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/openapi_python_client/schema/openapi_schema_pydantic/__init__.py", line 57, in <module>
from .open_api import OpenAPI
File "/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/openapi_python_client/schema/openapi_schema_pydantic/open_api.py", line 21, in <module>
class OpenAPI(BaseModel):
File "pydantic/main.py", line 197, in pydantic.main.ModelMetaclass.__new__
File "pydantic/fields.py", line 506, in pydantic.fields.ModelField.infer
File "pydantic/fields.py", line [43] in pydantic.fields.ModelField.__init__
File "pydantic/fields.py", line 5[52] in pydantic.fields.ModelField.prepare
File "pydantic/fields.py", line 663, in pydantic.fields.ModelField._type_analysis
File "pydantic/fields.py", line 808, in pydantic.fields.ModelField._create_sub_type
File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic/fields.py", line [55] in pydantic.fields.ModelField.prepare
File "pydantic/fields.py", line 668, in pydantic.fields.ModelField._type_analysis
File "/opt/hostedtoolcache/Python/3.9.16/x[64]/lib/python3.9/typing.py", line 852, in __subclasscheck__
return issubclass(cls, self.__origin__)
```
**To Reproduce**
Bump up typing-extensions in your requirements.txt file to 4.6.0
**Expected behavior**
Expect openapi-python-client to work as usual.
**Desktop (please complete the following information):**
- OS: Ubuntu 20.x
- Python Version: 3.9.16
- openapi-python-client version 0.13.1
|
closed
|
2023-05-23T19:24:59Z
|
2023-05-27T00:58:46Z
|
https://github.com/openapi-generators/openapi-python-client/issues/763
|
[
"🐞bug"
] |
pacificsky
| 1
|
KaiyangZhou/deep-person-reid
|
computer-vision
| 10
|
Problem evaluating Market
|
When I try to evaluate Market-1501 I get the following error:
File "/home/konstantinou/virtualenvs/pytorch_python2/local/lib/python2.7/site-packages/torch/nn/modules/linear.py", line 49, in reset_parameters
stdv = 1. / math.sqrt(self.weight.size(1))
RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)
The command I use is the following:
python train_img_model_xent.py -d market1501 -a resnet50 --evaluate --resume saved-models/resnet50_xent_market1501.pth.tar --save-dir log/resnet50m_xent_market1501 --test-batch 32
|
closed
|
2018-05-11T12:39:24Z
|
2018-05-11T13:29:42Z
|
https://github.com/KaiyangZhou/deep-person-reid/issues/10
|
[] |
akonstantinou
| 4
|
davidsandberg/facenet
|
computer-vision
| 397
|
How to get a picture's embedding vector?
|
I'm a newcomer. I want to get a new picture's embedding vector; how should I do that?
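For reference, a sketch following this repo's `compare.py` (the model directory name is one of the released pretrained models, and the random array stands in for a properly aligned/prewhitened 160x160 face):
```python
import numpy as np
import tensorflow as tf
import facenet  # from this repo's src/ directory

with tf.Graph().as_default(), tf.Session() as sess:
    facenet.load_model("20180402-114759")  # path to a pretrained model dir/.pb

    graph = tf.get_default_graph()
    images_placeholder = graph.get_tensor_by_name("input:0")
    embeddings = graph.get_tensor_by_name("embeddings:0")
    phase_train_placeholder = graph.get_tensor_by_name("phase_train:0")

    # stand-in for a real aligned, prewhitened face batch of shape (N, 160, 160, 3)
    images = np.random.rand(1, 160, 160, 3)
    feed_dict = {images_placeholder: images, phase_train_placeholder: False}
    emb = sess.run(embeddings, feed_dict=feed_dict)  # one embedding row per face
```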
|
closed
|
2017-07-26T17:34:45Z
|
2018-07-21T13:37:19Z
|
https://github.com/davidsandberg/facenet/issues/397
|
[] |
wjqy1510
| 4
|
onnx/onnx
|
tensorflow
| 5,853
|
Request for Swish Op
|
# Swish/SiLU
Do you have any plans to implement the Swish Op in ONNX?
### Describe the operator
Swish is a popular activation function. Its mathematical definition can be found at https://en.wikipedia.org/wiki/Swish_function
TensorFLow has https://www.tensorflow.org/api_docs/python/tf/nn/silu
Keras has https://keras.io/api/layers/activations/ (also in https://www.tensorflow.org/api_docs/python/tf/keras/activations/swish)
Pytorch has https://pytorch.org/docs/stable/generated/torch.nn.SiLU.html
### Can this operator be constructed using existing onnx operators?
Yes, it can be implemented as a combination of Mul and Sigmoid ops:
x * Sigmoid(beta * x)
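To illustrate, a sketch of that decomposition as a plain ONNX graph, with a scalar `beta` initializer (fixed here to 1.0, which gives SiLU):
```python
import onnx
from onnx import TensorProto, helper

nodes = [
    helper.make_node("Mul", ["beta", "x"], ["bx"]),  # beta * x
    helper.make_node("Sigmoid", ["bx"], ["sbx"]),    # sigmoid(beta * x)
    helper.make_node("Mul", ["x", "sbx"], ["y"]),    # x * sigmoid(beta * x)
]
graph = helper.make_graph(
    nodes,
    "swish",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, ["N"])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, ["N"])],
    initializer=[helper.make_tensor("beta", TensorProto.FLOAT, [], [1.0])],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)
```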
### Is this operator used by any model currently? Which one?
Yes. Modern Yolo series like yolov5, yolov7, yolov8, yolop and EfficientNet all have such Swish Ops.
Yolov5: https://github.com/ultralytics/yolov5/blob/master/models/tf.py#L224
EfficientNet:
https://paperswithcode.com/method/efficientnet which has Swish in https://github.com/lukemelas/EfficientNet-PyTorch/blob/2eb7a7d264344ddf15d0a06ee99b0dca524c6a07/efficientnet_pytorch/model.py#L294
### Are you willing to contribute it? (Y/N)
Possibly Yes.
### Notes
|
open
|
2024-01-11T08:18:22Z
|
2025-02-01T06:43:04Z
|
https://github.com/onnx/onnx/issues/5853
|
[
"topic: operator",
"stale",
"contributions welcome"
] |
vera121
| 7
|
Zeyi-Lin/HivisionIDPhotos
|
fastapi
| 20
|
Update List - 2024.9.2
|
## Gradio Demo
- [x] Support setting the photo file size (KB)
- [x] More default sizes
- [x] DockerHub
- [x] Multiple languages
|
closed
|
2024-09-02T05:03:31Z
|
2024-09-03T02:36:29Z
|
https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/20
|
[] |
Zeyi-Lin
| 0
|
facebookresearch/fairseq
|
pytorch
| 4,654
|
A question about adding audio samples to XLSR-53
|
I have already used XLSR-53 successfully, fine-tuning it according to https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
However, I have a lot of unlabeled data. I understand that building a wav2vec model from scratch does not make sense, so I want to continue training a model
with my audio while using XLSR-53 as a base - but I could not find an explanation of how to do it.
|
open
|
2022-08-16T07:56:35Z
|
2022-08-16T07:56:35Z
|
https://github.com/facebookresearch/fairseq/issues/4654
|
[
"question",
"needs triage"
] |
arikhalperin
| 0
|
CPJKU/madmom
|
numpy
| 196
|
downbeat activations should only contain beats & downbeats, not non-beats
|
closed
|
2016-07-31T06:47:03Z
|
2016-07-31T11:51:21Z
|
https://github.com/CPJKU/madmom/issues/196
|
[] |
superbock
| 0
|
|
2noise/ChatTTS
|
python
| 22
|
Suggestion: provide a portable build to make it easier to get started
|
closed
|
2024-05-28T12:51:41Z
|
2024-06-29T06:17:14Z
|
https://github.com/2noise/ChatTTS/issues/22
|
[
"stale",
"ad"
] |
andyhebear
| 1
|
|
sammchardy/python-binance
|
api
| 1,491
|
API removal
|
https://github.com/vikky-wire
|
closed
|
2024-12-01T07:45:14Z
|
2024-12-01T14:51:38Z
|
https://github.com/sammchardy/python-binance/issues/1491
|
[] |
Tanimola50
| 1
|
keras-team/keras
|
deep-learning
| 20,158
|
Tensorboard callback is blocking process
|
I am unable to find the transferred issue: https://github.com/keras-team/tf-keras/issues/496
This issue is still occurring and creates a performance bottleneck when writing to cloud storage.
|
open
|
2024-08-23T18:05:17Z
|
2024-09-05T17:09:59Z
|
https://github.com/keras-team/keras/issues/20158
|
[
"stat:awaiting keras-eng"
] |
rivershah
| 2
|
darrenburns/posting
|
rest-api
| 72
|
Feature request: cURL command parsing
|
It could be very helpful to have the ability to populate the different fields from a cURL command. It could be implemented in different ways, but two that I can think of are either a floating pane where the command could be pasted, or allowing direct pasting of the command and detecting whether it's a valid one.
|
closed
|
2024-08-02T22:50:52Z
|
2024-08-02T22:57:05Z
|
https://github.com/darrenburns/posting/issues/72
|
[] |
taha-yassine
| 0
|
pyeventsourcing/eventsourcing
|
sqlalchemy
| 109
|
Support Redis streams
|
https://brandur.org/redis-streams
|
closed
|
2017-11-14T11:27:24Z
|
2019-06-12T22:30:18Z
|
https://github.com/pyeventsourcing/eventsourcing/issues/109
|
[
"enhancement",
"help wanted"
] |
johnbywater
| 0
|
paperless-ngx/paperless-ngx
|
django
| 7,737
|
[BUG] Missing Repo for Helm-Chart
|
### Description
In this PR @alexander-bauer wanted to create a dedicated Repo for Helm:
https://github.com/paperless-ngx/paperless-ngx/pull/2119#issuecomment-1374932540
### Steps to reproduce
1. Create the repo
2. Move the chart data there
### Webserver logs
```bash
*
```
### Browser logs
_No response_
### Paperless-ngx version
*
### Host OS
*
### Installation method
Other (please describe above)
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
|
closed
|
2024-09-18T22:12:14Z
|
2024-10-20T03:12:03Z
|
https://github.com/paperless-ngx/paperless-ngx/issues/7737
|
[
"not a bug"
] |
genofire
| 3
|
nolar/kopf
|
asyncio
| 1,121
|
Make Probes access logs configurable
|
### Problem
With the current implementation of probes, the default logger logs every access request. While this can be beneficial in some cases, it is only noise in the long run. I would like to be able to disable access logs.
Example log entry that I am referring to:
```
{"message": "10.244.0.1 [04/Aug/2024:20:41:11 +0000] \"GET /healthz HTTP/1.1\" 200 221 \"-\" \"kube-probe/1.30\"", "taskName": "Task-583", "remote_address": "10.244.0.1", "request_start_time": "[04/Aug/2024:20:41:11 +0000]", "first_request_line": "GET /healthz HTTP/1.1", "response_status": 200, "response_size": 221, "request_header": {"Referer": "-", "User-Agent": "kube-probe/1.30"}, "timestamp": "2024-08-04T20:41:11.465308+00:00", "severity": "info"}
```
### Proposal
Make access logs configurable as mentioned in the note here: https://docs.aiohttp.org/en/stable/logging.html#access-logs :
```
Use web.run_app(app, access_log=None) to disable access logs.
```
in https://github.com/nolar/kopf/blob/main/kopf/_core/engines/probing.py#L82C26-L82C35
### Code
```python
import kopf
import logging
@kopf.on.startup()
def configure(settings: kopf.OperatorSettings, **_):
settings.probes.access_logs = None
```
### Additional information
Instead of `None` it could also just be a boolean toggle, like `settings.probes.access_logs.enabled = False`.
Another alternative would be to drop the access logs to the DEBUG level, but I think this would make the implementation more complex.
Please let me know which direction would be acceptable for a PR.
|
open
|
2024-08-04T21:05:59Z
|
2024-08-04T21:06:21Z
|
https://github.com/nolar/kopf/issues/1121
|
[
"enhancement"
] |
Lerentis
| 0
|
tensorlayer/TensorLayer
|
tensorflow
| 753
|
Typo in absolute difference error function
|
### New Issue Checklist
- [x] I have read the [Contribution Guidelines](https://github.com/tensorlayer/tensorlayer/blob/master/CONTRIBUTING.md)
- [x] I searched for [existing GitHub issues](https://github.com/tensorlayer/tensorlayer/issues)
### Issue Description
There is a typo in the absolute difference error function:
``def absolute_difference_error(output, target, is_mean=False, name="mean_squared_error_loss")``
Here, ``name`` should be changed to ``absolute_difference_error_loss``. It causes confusion in TensorBoard graphs.
|
closed
|
2018-07-25T09:08:31Z
|
2018-07-30T13:52:22Z
|
https://github.com/tensorlayer/TensorLayer/issues/753
|
[] |
thangvubk
| 2
|
xinntao/Real-ESRGAN
|
pytorch
| 152
|
Training log
|
Thank you very much! Following your training guide, I replaced the super-resolution model with the XLSR model, but the l_pix loss only drops to the e-2 level and the validation results are unsatisfactory. Could you please share your RealESRNet training log?
Also, do you have any suggestions for deploying the model to mobile?
Thanks!
|
open
|
2021-11-09T07:48:29Z
|
2021-11-09T07:48:29Z
|
https://github.com/xinntao/Real-ESRGAN/issues/152
|
[] |
liujianisme
| 0
|
coqui-ai/TTS
|
python
| 2,872
|
[Bug] AttributeError: 'NoneType' object has no attribute 'hop_length'
|
### Describe the bug
I am trying to train the following dataset and I am getting the following error during training:
**train.py**
```
import korean
import os
os.environ['CUDA_VISIBLE_DEVICES'] ="0"
import torch
from trainer import Trainer, TrainerArgs
from TTS.utils.audio import AudioProcessor
from TTS.vocoder.configs import HifiganConfig, MultibandMelganConfig
from TTS.vocoder.datasets.preprocess import load_wav_data
from TTS.vocoder.models.gan import GAN
output_path =os.path.join("./model/vocoder/mbmelgan/")
config = MultibandMelganConfig(
batch_size=256,
eval_batch_size=16,
num_loader_workers=0,
num_eval_loader_workers=0,
run_eval=True,
test_delay_epochs=5,
epochs=100,
seq_len=8192,
pad_short=2000,
use_noise_augment=True,
eval_split_size=10,
print_step=25,
print_eval=False,
mixed_precision=False,
lr_gen=1e-4,
lr_disc=1e-4,
data_path="",
output_path=output_path,
steps_to_start_discriminator=0
)
# init audio processor
ap = AudioProcessor(**config.audio.to_dict())
# load training samples
eval_samples, train_samples=[[],[]]
for path in ["./resample1/kss/wavs","./resample1/pansori_tedxkr/wavs",]:
eval_samples_temp, train_samples_temp = load_wav_data(path, config.eval_split_size)
eval_samples+=eval_samples_temp
train_samples+=train_samples_temp
print(eval_samples)
# init model
model = GAN(config).cuda()
# init the trainer and 🚀
trainer = Trainer(
TrainerArgs(),
config,
output_path,
model=model,
train_samples=train_samples,
eval_samples=eval_samples,
training_assets={"audio_processor": ap},
)
# start training
trainer.fit()
```
### To Reproduce
Run the train.py

### Expected behavior
_No response_
### Logs
```shell
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log10
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:True
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:None
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:20.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:45
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:10
| > hop_length:256
| > win_length:1024
['./resample1/kss/wavs/4_0059.wav', './resample1/kss/wavs/1_0061.wav', './resample1/kss/wavs/4_0015.wav', './resample1/kss/wavs/3_0099.wav', './resample1/kss/wavs/1_0054.wav', './resample1/kss/wavs/4_0031.wav', './resample1/kss/wavs/1_0013.wav', './resample1/kss/wavs/4_0019.wav', './resample1/kss/wavs/3_0002.wav', './resample1/kss/wavs/1_0091.wav', './resample1/pansori_tedxkr/wavs/67CV7J6E7Iic-ZBNO2Drz36c-0348.wav', './resample1/pansori_tedxkr/wavs/7J207ISx67KU-znxAJsY__HM-0011.wav', './resample1/pansori_tedxkr/wavs/6rmA7Zic7KCV-grgRnDg-o94-0205.wav', './resample1/pansori_tedxkr/wavs/67CV7J6E7Iic-ZBNO2Drz36c-0040.wav', './resample1/pansori_tedxkr/wavs/7KGw7KO87ZiE-Mcs_1DV6Sgc-0147.wav', './resample1/pansori_tedxkr/wavs/7Iug6re87Iud-8-dSwR5iUyY-0317.wav', './resample1/pansori_tedxkr/wavs/7KCV6rSR7ZmU-Q4rvB0NaxGE-0193.wav', './resample1/pansori_tedxkr/wavs/7KGw7KO87ZiE-Mcs_1DV6Sgc-0050.wav', './resample1/pansori_tedxkr/wavs/7Iah7IiY7Jqp-2B1iXo1c1Tk-0205.wav', './resample1/pansori_tedxkr/wavs/6rOg6rG07ZiB-aWPB0xeM8UA-0378.wav']
> Generator Model: multiband_melgan_generator
> Discriminator Model: melgan_multiscale_discriminator
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
> Training Environment:
| > Backend: Torch
| > Mixed precision: False
| > Precision: float32
| > Current device: 0
| > Num. of GPUs: 1
| > Num. of CPUs: 8
| > Num. of Torch Threads: 8
| > Torch seed: 54321
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
| > Torch TF32 MatMul: True
> Start Tensorboard: tensorboard --logdir=./model/vocoder/mbmelgan/run-August-14-2023_06+29PM-0000000
> Model has 6894446 parameters
> EPOCH: 0/100
--> ./model/vocoder/mbmelgan/run-August-14-2023_06+29PM-0000000
! Run is removed from ./model/vocoder/mbmelgan/run-August-14-2023_06+29PM-0000000
Traceback (most recent call last):
File "/opt/miniconda3/envs/tts/lib/python3.9/site-packages/trainer/trainer.py", line 1806, in fit
self._fit()
File "/opt/miniconda3/envs/tts/lib/python3.9/site-packages/trainer/trainer.py", line 1758, in _fit
self.train_epoch()
File "/opt/miniconda3/envs/tts/lib/python3.9/site-packages/trainer/trainer.py", line 1467, in train_epoch
self.train_loader = self.get_train_dataloader(
File "/opt/miniconda3/envs/tts/lib/python3.9/site-packages/trainer/trainer.py", line 931, in get_train_dataloader
return self._get_loader(
File "/opt/miniconda3/envs/tts/lib/python3.9/site-packages/trainer/trainer.py", line 895, in _get_loader
loader = model.get_data_loader(
File "/workspace/tts/coqui-tts/TTS/TTS/vocoder/models/gan.py", line 344, in get_data_loader
hop_len=self.ap.hop_length,
AttributeError: 'NoneType' object has no attribute 'hop_length'
```
### Environment
```shell
pip install TTS
TTS version: 0.16.3
Env: Linux Docker container
PyTorch: 2.0.1
nvidia-cuda-runtime-cu11: 11.7.99
```
### Additional context

|
closed
|
2023-08-14T09:53:11Z
|
2023-08-21T08:35:25Z
|
https://github.com/coqui-ai/TTS/issues/2872
|
[
"bug"
] |
zhanglina94
| 0
|
chatopera/Synonyms
|
nlp
| 78
|
Can it be installed on Windows?
|
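For reference, the project's documented installation is via pip, which also works on Windows, provided the environment can reach PyPI and download the word-vector data on first use:
```python
# Install first (works on Windows as well): pip install -U synonyms
import synonyms

# Minimal smoke test following the project's README-style usage
print(synonyms.nearby("人脸"))  # nearby words and similarity scores
```
|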
closed
|
2019-03-17T08:54:46Z
|
2019-04-21T01:04:11Z
|
https://github.com/chatopera/Synonyms/issues/78
|
[] |
SeekPoint
| 1
|
|
SYSTRAN/faster-whisper
|
deep-learning
| 518
|
Can faster-whisper support TPU?
|
Kaggle and Colab both supply TPUs, which are much faster than a T4. Can faster-whisper support them?
|
closed
|
2023-10-19T10:16:02Z
|
2023-11-26T01:40:51Z
|
https://github.com/SYSTRAN/faster-whisper/issues/518
|
[] |
ILG2021
| 4
|
strawberry-graphql/strawberry
|
asyncio
| 3,470
|
Endless integrations with all Python frameworks
|
Why do developers build separate integrations with other frameworks? For example, graphene has integrations with sqlalchemy, django, flask, and so on, while sqlalchemy itself does not integrate with anything. After all, a good library is one that does not create unnecessary dependencies but simply fits into any technology stack without requiring integrations.
Is it really impossible to make a single good integration that works with any framework, the way sqlalchemy does?
|
closed
|
2024-04-23T08:23:42Z
|
2025-03-20T15:56:42Z
|
https://github.com/strawberry-graphql/strawberry/issues/3470
|
[] |
ArtemIsmagilov
| 1
|
great-expectations/great_expectations
|
data-science
| 11,032
|
Optional limitation to 200 records for UnexpectedRowsExpectation
|
**Is your feature request related to a problem? Please describe.**
I need to retrieve ALL rows that failed the UnexpectedRowsExpectation. Currently, unexpected_rows is limited to 200 rows.
**Describe the solution you'd like**
I need the possibility to disable the limit. An additional boolean parameter that selects between "limit to 200" (which could remain the default) and "no limit" would be sufficient.
**Additional context**
See also
https://discourse.greatexpectations.io/t/unexpectedrowsexpectation-unexpected-rows-limit/2026
https://discourse.greatexpectations.io/t/unexpected-list-is-limited-to-200-rows-even-after-i-precise-my-result-format-to-complete/2059
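For illustration, a hedged sketch of what the requested opt-out might look like on today's `UnexpectedRowsExpectation`; the `unexpected_rows_limit` keyword is hypothetical and does not exist in Great Expectations:
```python
import great_expectations as gx

# Hypothetical sketch only: the query form follows the current API,
# while unexpected_rows_limit is the proposed (non-existent) parameter.
expectation = gx.expectations.UnexpectedRowsExpectation(
    unexpected_rows_query="SELECT * FROM {batch} WHERE amount < 0",
    # unexpected_rows_limit=None,  # proposed: None would mean "return all rows"
)
```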
|
open
|
2025-03-17T13:26:30Z
|
2025-03-17T18:21:41Z
|
https://github.com/great-expectations/great_expectations/issues/11032
|
[
"feature-request"
] |
HOsthues
| 0
|
pyqtgraph/pyqtgraph
|
numpy
| 2,790
|
Niggles with pen parameter
|
### Short description
When adding a pen parameter to a parameter list, the following behavior is observed:
1. The name (title row) is rendered in **bold** face
2. The parameter tree (containing Color, Width, Style, Cap Style, Join Style and Cosmetic) is fully expanded
3. The **alternating** row colors, set to ```True``` in the file ```ParameterTree.py```, are interrupted: **all top rows are always grey**
### Code to reproduce
```python
import pyqtgraph as pg
from pyqtgraph.Qt.QtGui import QColor  # needed: QColor is used below
# Note: the self.* attributes and `cmap` below come from the reporter's
# surrounding application; this snippet is excerpted from a class method.
colorParams = [
dict(name='Color Settings', type='group', children=[
dict(name='Bin area color', type='color', value=QColor(self.binAreaColor)),
dict(name='Cmp area color', type='color', value=QColor(self.cmpAreaColor)),
dict(name='Rec area color', type='color', value=QColor(self.recAreaColor)),
dict(name='Src area color', type='color', value=QColor(self.srcAreaColor)),
dict(name='Bin area pen', type='pen', color='k', expanded=False ),
dict(name='Cmp area pen', type='pen', color='g', expanded=False ),
dict(name='Rec area pen', type='pen', color='b', expanded=False ),
dict(name='Src area pen', type='pen', color='r', expanded=False ),
dict(name='Analysis color map', type='cmap', value=self.analysisCmap),
dict(name='Inactive color map', type='cmap', value=self.inActiveCmap),
dict(name='ColorMap', type='colormap', value=cmap),
]),
]
self.parameters = pg.parametertree.Parameter.create(name='Analysis Settings', type='group', children=colorParams)
```
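For convenience, a self-contained variant of the snippet above that can be run directly; the concrete colors and the event-loop boilerplate are stand-ins for the reporter's application state, and the reduced set of children is enough to compare the row shading and font of the parameter types:
```python
import pyqtgraph as pg
from pyqtgraph.parametertree import Parameter, ParameterTree

app = pg.mkQApp()

# Trimmed-down parameter list: one color, one pen, one colormap child.
params = Parameter.create(name='Analysis Settings', type='group', children=[
    dict(name='Color Settings', type='group', children=[
        dict(name='Bin area color', type='color', value='y'),
        dict(name='Bin area pen', type='pen', color='k', expanded=False),
        dict(name='ColorMap', type='colormap', value=pg.colormap.get('CET-L9')),
    ]),
])

tree = ParameterTree()
tree.setParameters(params, showTop=True)
tree.show()

pg.exec()  # start the Qt event loop
```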
### Expected behavior
1. The background color should be consistent with the one set by ```self.setAlternatingRowColors(True)``` in ```ParameterTree.py```, as is for instance the case with the 'colormap' parameter
2. The parameter tree should preferably default to collapsed (although this can already be achieved with the ```expanded=False``` option)
3. The **alternating** row colors should be adhered to, as for instance 'colormap' is doing
### Real behavior
The real behavior is described under 'Short description' above. No serious error occurred; just niggles and inconsistencies :)

The picture above shows the issue after collapsing the (sub)trees
* **Bold** face is used for the pen parameter's name, whereas the other parameter types (such as 'color' and 'colormap') correctly use a plain face
* All top lines for pen items are grey (no alternating background colors being used)
### Tested environment(s)
* PyQtGraph version: 0.13.1
* Qt Python binding: 5.15.4
* Python version: 3.9.5
* NumPy version: 1.20.2
* Operating system: Windows 10
* Installation method: pip
### Additional context
None
|
open
|
2023-08-01T11:51:03Z
|
2023-08-09T02:11:29Z
|
https://github.com/pyqtgraph/pyqtgraph/issues/2790
|
[
"parameterTree"
] |
MrBeee
| 2
|