repo_name (string, 9–75 chars) | topic (string, 30 classes) | issue_number (int64, 1–203k) | title (string, 1–976 chars) | body (string, 0–254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38–105 chars) | labels (list, 0–9 items) | user_login (string, 1–39 chars) | comments_count (int64, 0–452)
|---|---|---|---|---|---|---|---|---|---|---|---|
nltk/nltk
|
nlp
| 3,288
|
Incomplete Language Support in NLTK for Open Multilingual WordNet
|
I've been using NLTK's interface to access the Open Multilingual WordNet (OMW). According to the WordNet website, OMW Version 1 links hand-created wordnets and automatically created wordnets for over 150 languages through the Princeton WordNet of English. However, when I check the available languages using NLTK, I only get the following list:
['als', 'arb', 'bul', 'cat', 'cmn', 'dan', 'ell', 'eng', 'eus', 'fin', 'fra', 'glg', 'heb', 'hrv', 'ind', 'isl', 'ita', 'ita_iwn', 'jpn', 'lit', 'nld', 'nno', 'nob', 'pol', 'por', 'ron', 'slk', 'slv', 'spa', 'swe', 'tha', 'zsm']
It appears that there is a significant discrepancy between the number of languages mentioned on the WordNet website and those available through NLTK.
Questions:
Is there a way to download or access the additional languages that are supposed to be part of OMW Version 1?
Is the limitation of available languages in NLTK a known issue, or is there something I might be doing wrong in my setup?
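For reference, a minimal sketch of how that list is produced, assuming the standard NLTK corpus API and the `omw-1.4` data package that recent NLTK versions ship for OMW:
```python
import nltk
from nltk.corpus import wordnet as wn

# The OMW data must be downloaded before any extra languages show up.
nltk.download("wordnet")
nltk.download("omw-1.4")

print(sorted(wn.langs()))             # the language codes NLTK actually ships
print(wn.synsets("dog", lang="eng"))  # lookups work for any listed language
```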
|
closed
|
2024-07-18T11:47:11Z
|
2024-08-13T06:27:20Z
|
https://github.com/nltk/nltk/issues/3288
|
[] |
zarkua
| 1
|
yeongpin/cursor-free-vip
|
automation
| 123
|
ubuntu24.04: the browser cannot be opened
|
🚀 Starting the registration process...
🚀 Opening the browser...
An error occurred:
Browser connection failed.
Address: 127.0.0.1:27973
Hints:
1. Make sure the user data folder does not conflict with an already-open browser
2. On a headless system, add the '--headless=new' launch argument
3. On Linux, try adding the '--no-sandbox' launch argument
The port and user data folder path can be set via ChromiumOptions.
Version: 4.1.0.17
The system already has the latest Chrome installed: version 133.0.6943.141 (official build) (64-bit),
and the latest Firefox: 135.0.1 (64-bit),
but it still gets stuck on opening the browser. What should I do? I can't get past this step.
|
closed
|
2025-02-28T13:35:30Z
|
2025-03-06T04:22:52Z
|
https://github.com/yeongpin/cursor-free-vip/issues/123
|
[
"documentation"
] |
Alfred1109
| 4
|
NullArray/AutoSploit
|
automation
| 827
|
Divided by zero exception97
|
Error: Attempted to divide by zero.97
|
closed
|
2019-04-19T16:01:16Z
|
2019-04-19T16:37:34Z
|
https://github.com/NullArray/AutoSploit/issues/827
|
[] |
AutosploitReporter
| 0
|
biolab/orange3
|
numpy
| 6,948
|
Group by - Mode of categorical variable is sometimes completely off
|
<!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
**What's wrong?**
When grouping by a categorical variable, selecting mode of another categorical variable sometimes produces an incorrect value
**How can we reproduce the problem?**
In the attached workflow, the variable "group" is grouped by cluster (apologies for the confusing variable name). In the result, for instance, the grouping of C2 shows in the concatenation that DOS is the most frequent group, but the mode column shows HCD, which isn't even present in C2.
[Group by - mode bug.zip](https://github.com/user-attachments/files/18021768/Group.by.-.mode.bug.zip)
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system: Mac OS Sequoia
- Orange version: 3.38.0
- How you installed Orange: upgraded from DMG
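For cross-checking outside Orange, a minimal pandas sketch of the same aggregation (column names and values here are hypothetical, mirroring the report):
```python
import pandas as pd

# The mode of "group" within cluster C2 should be a value that occurs in C2.
df = pd.DataFrame({"cluster": ["C1", "C2", "C2", "C2"],
                   "group":   ["HCD", "DOS", "DOS", "ACK"]})
print(df.groupby("cluster")["group"].agg(lambda s: s.mode().iat[0]))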
|
closed
|
2024-12-05T10:48:58Z
|
2024-12-19T12:23:59Z
|
https://github.com/biolab/orange3/issues/6948
|
[
"bug"
] |
wvdvegte
| 2
|
explosion/spaCy
|
data-science
| 13,223
|
Example from https://spacy.io/universe/project/neuralcoref doesn't work for Polish
|
## How to reproduce the behaviour
Example from https://spacy.io/universe/project/neuralcoref works with english models:
```python
import spacy
import neuralcoref
nlp = spacy.load('en')
neuralcoref.add_to_pipe(nlp)
doc1 = nlp('My sister has a dog. She loves him.')
print(doc1._.coref_clusters)
doc2 = nlp('Angela lives in Boston. She is quite happy in that city.')
for ent in doc2.ents:
    print(ent._.coref_cluster)
```
Which outputs:
```
>> python .\spacy_alt.py
[My sister: [My sister, She], a dog: [a dog, him]]
Boston: [Boston, that city]
```
However if I use either `pl_core_news_lg` or `pl_core_news_sm` like that:
```python
import spacy
import neuralcoref
import pl_core_news_lg
#nlp = spacy.load('en_core_web_sm')
nlp = pl_core_news_lg.load()
neuralcoref.add_to_pipe(nlp)
doc1 = nlp('Moja siostra ma psa. Ona go kocha.')
#doc1 = nlp('My sister has a dog. She loves him.')
print(doc1._.coref_clusters)
doc2 = nlp(u'Anna żyje w Krakowie. Jest szczęśliwa w tym mieście.')
#doc2 = nlp('Angela lives in Boston. She is quite happy in that city.')
for ent in doc2.ents:
    print(ent._.coref_cluster)
```
I get the following output:
```
>> python .\spacy_alt.py
[]
None
None
```
I was guessing it might be connected to the fact that the English model is `_web_` while the Polish ones are `_news_`; however:
```
>> python -m spacy download pl_core_web_sm
✘ No compatible model found for 'pl_core_web_sm' (spaCy v2.3.7).
```
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: Windows 10 x64
* Python Version Used: Python 3.9.6
* spaCy Version Used: v2.3.7
* Environment Information: most likely irrelevant
|
closed
|
2024-01-08T10:42:48Z
|
2024-01-08T15:10:48Z
|
https://github.com/explosion/spaCy/issues/13223
|
[
"feat / coref"
] |
Zydnar
| 1
|
JaidedAI/EasyOCR
|
machine-learning
| 559
|
Is it possible to find the distance between two words, for example how many spaces are between them? Decreasing the width_ths in the detect method worked, but in some other places it does not work very well, especially when there is a number and a word together. E.g. "test 80" is detected as one word, whereas "test test" is detected as two words.
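A rough sketch of measuring the gap directly from the returned bounding boxes instead of tuning `width_ths`, assuming the usual `readtext` output of (box, text, confidence) tuples and a hypothetical input image:
```python
import easyocr

reader = easyocr.Reader(['en'])
# Sort detections left to right before measuring gaps.
results = sorted(reader.readtext('image.png'), key=lambda r: min(p[0] for p in r[0]))

# Horizontal gap: left edge of the next box minus right edge of the previous one
# (each box is a list of 4 corner points).
for (box_a, text_a, _), (box_b, text_b, _) in zip(results, results[1:]):
    right_a = max(p[0] for p in box_a)
    left_b = min(p[0] for p in box_b)
    print(f"gap between {text_a!r} and {text_b!r}: {left_b - right_a}px")
```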
|
closed
|
2021-10-01T14:53:47Z
|
2021-10-06T08:34:35Z
|
https://github.com/JaidedAI/EasyOCR/issues/559
|
[] |
warraich-a
| 1
|
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,518
|
Help with Rectangular Images
|
I have rectangular images (350 x 700) that I am trying to train on.
I understand from previous posts that I can include --preprocess crop to train it as such, and include --preprocess none in my test set.
However, when I do this I get RuntimeError: Sizes of tensors must match except in dimension 1 when I run test.py.
I'm thinking this may be because
(1) my image dimensions aren't powers of 2
(2) I didn't use the load_size / crop_size options
What do you think? Any input would be appreciated; I'm quite exasperated with myself, haha.
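One thing worth trying, sketched below: pad the images so every dimension divides evenly through the generator's down/upsampling stages, which is the usual cause of this tensor-size mismatch (the multiple of 4 here is an assumption that depends on the chosen netG):
```python
from PIL import Image

def pad_to_multiple(img: Image.Image, base: int = 4) -> Image.Image:
    """Pad right/bottom so width and height are multiples of `base`."""
    w, h = img.size
    new_w, new_h = -(-w // base) * base, -(-h // base) * base  # ceiling division
    canvas = Image.new(img.mode, (new_w, new_h))
    canvas.paste(img, (0, 0))
    return canvas

print(pad_to_multiple(Image.new("RGB", (350, 700))).size)  # -> (352, 700)
```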
|
closed
|
2022-12-09T09:12:34Z
|
2022-12-09T14:44:32Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1518
|
[] |
taggyhan
| 1
|
SciTools/cartopy
|
matplotlib
| 1,603
|
Clip data array to map extent before plotting
|
### Feature Request
Most of my projects with cartopy involve plotting the same data on many different map domains/zooms. Often, when working with weather data, I'll open a large netcdf file (could be data for the entire globe), but I'm only interested in plotting the data for a few different map domains in North America. As anyone who has experience plotting large datasets will know, this process can be _very_ slow. Often, this is because matplotlib wastes time plotting data that will ultimately be outside of the map extent. The way I speed this up is by taking a subset of the main dataset that covers each domain, with a bit of overlap to make sure I don't get gaps around the edges of the plot. This is fairly easy to do, if you are dealing with an xarray dataset.
My general workflow goes something like this:
```python
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import xarray as xr
# Figure
extent = [-123,-70,20,55]
fig = plt.figure(figsize=(12,9))
ax = fig.add_subplot(111, projection=ccrs.LambertConformal())
ax.set_extent(extent)
ax.add_feature(cfeature.COASTLINE.with_scale('50m'))
ax.add_feature(cfeature.STATES.with_scale('50m'))
# Open precip dataset and clip to domain shape
ds = xr.open_dataset('precip.nc')
padding = 1 # padding around domain
ds = ds.where((ds.longitude>=(extent[0]+360.-padding))&(ds.longitude<=(extent[1]+360.+padding))&
              (ds.latitude>=(extent[2]-padding))&(ds.latitude<=(extent[3]+padding)), drop=True)
# Then contour the data
ax.contourf(ds.longitude, ds.latitude, ds['APCP_surface'].values, cmap=..., ...)
```
It would be nice if we could skip the step of selecting a subset of the data (especially because, if you're not using xarray, this is very inconvenient), and instead simply pass the original global arrays of longitude, latitude, and data to contourf, with any points outside of the map extent automatically masked (this assumes the map extent has been set before the data is plotted). I'm not sure how easy this would be to do, but maybe determine all points that are inside of the map window and mask the rest before plotting? It could be a keyword argument in the contourf/pcolormesh/contour call that masks to a specified extent, or to the axis extent if it's already set.
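For what it's worth, a rough numpy-only sketch of the masking idea, assuming 2-D lon/lat arrays matching the data shape:
```python
import numpy as np

def mask_outside_extent(lon2d, lat2d, data, extent, pad=1.0):
    """Mask points outside [lon_min, lon_max, lat_min, lat_max] before contouring."""
    lon_min, lon_max, lat_min, lat_max = extent
    inside = ((lon2d >= lon_min - pad) & (lon2d <= lon_max + pad) &
              (lat2d >= lat_min - pad) & (lat2d <= lat_max + pad))
    return np.ma.masked_where(~inside, data)

lon2d, lat2d = np.meshgrid(np.linspace(-180, 180, 720), np.linspace(-90, 90, 360))
data = np.random.rand(360, 720)
clipped = mask_outside_extent(lon2d, lat2d, data, [-123, -70, 20, 55])
```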
|
closed
|
2020-07-03T16:07:17Z
|
2020-07-11T16:06:59Z
|
https://github.com/SciTools/cartopy/issues/1603
|
[] |
karlwx
| 8
|
ultralytics/ultralytics
|
pytorch
| 19,467
|
feature request about resuming model training with epochs change
|
### Scene
given trained model total 100 epochs done, then I want to resume training based on previous trained model and increasing/decreasing some epochs.
first 100 epochs:
```python
model = YOLO("yolo.pt")
results = model.train(
data="datasets/xxx.yaml",
save_period=2,
epochs=100,
)
```
then, resume training and change epochs:
```python
model = YOLO("epoch90.pt")
results = model.train(
data="datasets/xxx.yaml",
save_period=2,
epochs=300, # more epochs
resume=True, # resume flag
)
```
but `epochs=300` does not work; training still stops at 100.
As already pointed out in https://github.com/ultralytics/ultralytics/issues/16402 and https://github.com/ultralytics/ultralytics/issues/18154, I'm wondering why changing epochs when resuming is not supported in the current code base.
Is it possible to support it?
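A commonly-suggested workaround, sketched under the assumption that the checkpoint stores its training arguments under a `train_args` key (not verified against the current code base), is to rewrite the target epoch count inside the checkpoint before resuming:
```python
import torch

ckpt = torch.load("epoch90.pt", map_location="cpu")
# Assumption: Ultralytics checkpoints carry the original trainer arguments here.
if "train_args" in ckpt:
    ckpt["train_args"]["epochs"] = 300
torch.save(ckpt, "epoch90_300.pt")
```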
|
closed
|
2025-02-27T20:20:20Z
|
2025-03-01T16:26:39Z
|
https://github.com/ultralytics/ultralytics/issues/19467
|
[
"question"
] |
yantaozhao
| 2
|
tableau/server-client-python
|
rest-api
| 719
|
DatasourceItem does not have webpage_url attribute
|
According to the [documentation](https://tableau.github.io/server-client-python/docs/api-ref#data-sources), `DatasourceItem` has a `webpage_url` attribute. The corresponding [tableau server rest api docs](https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_ref.htm#query_data_source) also appears to return this field.
However, this attribute does not exist in the python client, even though the docs say it does:
```
>>> datasources, _ = server.datasources.get()
>>> datasources[0].website_url
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'DatasourceItem' object has no attribute 'website_url'
```
|
closed
|
2020-11-03T19:59:56Z
|
2020-11-09T23:59:03Z
|
https://github.com/tableau/server-client-python/issues/719
|
[] |
snazzyfox
| 2
|
pytorch/vision
|
machine-learning
| 8,823
|
Setting `90.0+0.0j` and `[-90.0+0.0j, 90.0+0.0j]` to `degrees` argument of `RandomRotation()` gets the indirect error messages
|
### 🐛 Describe the bug
Setting `90.0+0.0j` and `[-90.0+0.0j, 90.0+0.0j]` to `degrees` argument of [RandomRotation()](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomRotation.html) gets the indirect error messages as shown below:
```python
from torchvision.transforms.v2 import RandomRotation
RandomRotation(degrees=90.0+0.0j) # Error
RandomRotation(degrees=[-90.0+0.0j, 90.0+0.0j]) # Error
```
> TypeError: '<' not supported between instances of 'complex' and 'int'
> TypeError: float() argument must be a string or a real number, not 'complex'
So, they should be something direct like as shown below:
> TypeError: 'degrees' argument must be `int` or `float`
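Until then, a small wrapper can fail fast with the direct message; a sketch using `numbers.Real` (which excludes `complex` but accepts `int` and `float`):
```python
from numbers import Real
from torchvision.transforms.v2 import RandomRotation

def checked_random_rotation(degrees):
    # Validate up front so complex inputs fail with a direct message.
    vals = degrees if isinstance(degrees, (list, tuple)) else [degrees]
    if not all(isinstance(d, Real) for d in vals):
        raise TypeError("'degrees' argument must be `int` or `float`")
    return RandomRotation(degrees=degrees)

checked_random_rotation([-90.0+0.0j, 90.0+0.0j])  # raises the direct message
```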
### Versions
```python
import torchvision
torchvision.__version__ # '0.20.1+cpu'
```
|
closed
|
2024-12-23T21:52:20Z
|
2025-01-09T17:01:39Z
|
https://github.com/pytorch/vision/issues/8823
|
[] |
hyperkai
| 1
|
recommenders-team/recommenders
|
machine-learning
| 1,867
|
[FEATURE]
|
### Description
<!--- Describe your expected feature in detail -->
### Expected behavior with the suggested feature
<!--- For example: -->
<!--- *Adding algorithm xxx will help people understand more about xxx use case scenarios. -->
### Other Comments
|
closed
|
2022-12-06T14:06:02Z
|
2023-06-30T06:00:20Z
|
https://github.com/recommenders-team/recommenders/issues/1867
|
[
"enhancement"
] |
Ragghack
| 0
|
dask/dask
|
scikit-learn
| 11,396
|
Are there any workarounds for dask breaking altogether with higher amounts of load than what fits into a worker?
|
Are there any workarounds for dask breaking altogether with higher amounts of load than what fits into a worker?
Despite there being partitions, the finalize step runs the entire thing on a single worker, which renders the distributed nature of things pointless here.
This causes the finalize step to fail in _apply_transformations. Is there any alternative to make sure the finalize step does not cause issues?
**Minimal Complete Verifiable Example**:
```python
import asyncio
from contextlib import asynccontextmanager

import dask
import dask.dataframe as dd
import pandas as pd
from dask.distributed import Client


class DaskClientContext:
    def __init__(self, client):
        self.client = client

    async def __aenter__(self):
        return self.client

    async def __aexit__(self, exc_type, exc_value, traceback):
        print("Exit Called")
        await self.client.close()


@asynccontextmanager
async def dask_client_context():
    async with Client("tcp://dask-scheduler:8786", asynchronous=True, direct_to_workers=True) as client:
        client.restart()
        yield client


async def _apply_transformations(df):
    if isinstance(df, pd.DataFrame):
        dask_df = dd.from_pandas(df, npartitions=8)
    else:
        dask_df = df  # already a dask DataFrame
    async with dask_client_context() as client:
        try:
            dask.config.set(
                {
                    "distributed.workers.memory.spill": 0.80,
                    "distributed.workers.memory.target": 0.70,
                    "distributed.workers.memory.terminate": 0.99,
                    "optimization.fuse.ave-width": 4,
                    "dataframe.shuffle.method": "tasks",
                }
            )
            # dask_df = dask_df.persist()
            result = await client.compute(dask_df)
        except RuntimeError as e:
            raise ValueError(f"Error Applying Transformations for: {str(e)}")
    return {"dataframe": result}


async def _concat_dask_dataframe(df_list):
    dask_list = []
    combined_meta_df_list = []  # was referenced before assignment in the original
    n = 16
    for df in df_list:
        if not isinstance(df, dd.DataFrame):
            if isinstance(df, pd.DataFrame):
                npartitions = max(1, len(df) // 20_000)
                df = dd.from_pandas(df, npartitions=npartitions, sort=True)
            else:
                raise ValueError("All elements must be DataFrames")
        combined_meta_df_list.append(df._meta)
        dask_list.append(df)
    combined_meta = pd.concat(combined_meta_df_list, axis=0).iloc[0:0]
    combined_meta = dd.utils.make_meta(combined_meta)  # drop_duplicates(inplace=True) returned None in the original
    data = dd.concat(dask_list).reset_index(drop=True)  # dd.concat infers the meta itself
    data = data.repartition(npartitions=n)
    return {"dataframe": data}


# feel free to use any csv with 50,000 rows
# (awaited at top level, so run inside an async context such as a notebook)
df = dd.read_csv("data.csv")
df = (await _concat_dask_dataframe([df, df, df, df, df]))["dataframe"]
df = (await _concat_dask_dataframe([df, df, df, df, df]))["dataframe"]
df = (await _concat_dask_dataframe([df, df, df, df, df]))["dataframe"]
df = (await _apply_transformations(df))["dataframe"]
```
I do have limited compute here; this is the docker compose:
```yaml
dask-scheduler:
  image: ghcr.io/dask/dask:2024.5.2-py3.12
  ports:
    - "8786:8786"
    - "8787:8787"
  command:
    - bash
    - -c
    - |
      pip install requests aiohttp
      dask-scheduler
  environment:
    - PYTHONWARNINGS=ignore
  networks:
    - dask
  extra_hosts:
    - "host.docker.internal:host-gateway"
  volumes:
    - /data:/data
    - ./etc/dask:/etc/dask
  deploy:
    resources:
      limits:
        cpus: "2"
        memory: "3G"
      reservations:
        cpus: "2"
        memory: "3G"
  restart: always

dask-worker:
  image: ghcr.io/dask/dask:2024.5.2-py3.12
  environment:
    - DASK_CONFIG=/etc/dask
    - PYTHONWARNINGS=ignore
  volumes:
    - ./etc/dask:/etc/dask
    - ./tmp/dask-worker-space:/tmp/dask-worker-space
    - ./data:/data
  deploy:
    replicas: 3
    resources:
      limits:
        cpus: "2.0"
        memory: "3G"
      reservations:
        cpus: "2.0"
        memory: "3G"
  command:
    - bash
    - -c
    - |
      pip install requests aiohttp
      dask worker dask-scheduler:8786
  networks:
    - dask
  extra_hosts:
    - "host.docker.internal:host-gateway"
  restart: always

networks:
  dask:
    driver: bridge
  default:
    external:
      name: host
```
**Environment**:
- Dask version: 2024.5.2
- Python version: 3.12
- Operating System: Ubuntu 22.04
- Install method (conda, pip, source): pip
|
closed
|
2024-09-18T19:41:36Z
|
2024-11-07T19:00:08Z
|
https://github.com/dask/dask/issues/11396
|
[
"needs triage"
] |
Cognitus-Stuti
| 4
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 507
|
Tensorflow 2.x implementation of synthesizer
|
As said in the « pytorch synthesizer » issue, I'm trying to retrain a synthesizer in TensorFlow 2.x (the model is inspired by NVIDIA's PyTorch implementation and is available on my GitHub).
I have already run some tests and have some results (see below for results, tests and future experiments).
## Data and hardware
For my tests, I try to train it in French on the CommonVoice dataset combined with SIWIS (a total of 180k utterances for around 2k speakers).
I use a GeForce GTX 1070 with 6.2 GB of RAM.
## Preprocessing and encoder
As encoder, I use a siamese network trained on the datasets described above.
The final siamese network achieves a binary accuracy of 97% (train and validation sets); see the « alternative approach to encoder » issue for more details about the model, approach and results.
For preprocessing, I use the default preprocessing of NVIDIA's tacotron-2 implementation (as I did transfer learning from this model to speed up training).
## Training procedure and parameters
For training, I have to split the input spectrogram into sub-blocks of N frames because I don't have enough memory to train on the whole spectrogram in one step.
The training step is available on my GitHub if you want to see it.
Hyperparameters are:
- Batch size: 64 (graph mode) or 48 (eager mode)
- N (nb frames / optimization step): 15-20 (graph mode) and 40-50 (eager mode)
- Dataset size: around 160,000 samples
- Training / validation split: 90% of the dataset for training and 10% for validation
- Optimizer: Adam with epsilon 1e-3 and a custom learning-rate scheduler (goes from 0.00075 to 0.00025)
- Loss: tacotron loss (inspired by the NVIDIA repo): sum of masked MSE for the mel output, masked MSE for the mel-postnet output, and BCE for the gate output
- Training time: around 15 sec/step (graph mode) and 20 sec/step in eager mode (1 batch on the entire spectrogram), and around 11 hours for 1 epoch (training + validation)
Note: graph mode is a specificity of TensorFlow 2.x enabled with the `tf.function` decorator; it is less memory efficient for this model but much faster, so I run most experiments in graph mode (I put only the `call()` method of the decoder in graph mode; the rest stays in eager mode because it doesn't work in graph mode). A rough illustration of such a compiled training step is sketched below.
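A sketch of a compiled sub-block training step with the tacotron-style loss described above (all names are hypothetical, not the actual repo code):
```python
import tensorflow as tf

@tf.function  # graph mode: compiled once, then reused for every sub-block
def train_step(model, optimizer, text, mel_block, gate_block):
    with tf.GradientTape() as tape:
        mel, mel_postnet, gate = model(text, mel_block, training=True)
        loss = (tf.reduce_mean(tf.square(mel - mel_block)) +
                tf.reduce_mean(tf.square(mel_postnet - mel_block)) +
                tf.reduce_mean(tf.keras.losses.binary_crossentropy(gate_block, gate)))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# The full spectrogram is consumed in sub-blocks of N frames per step, e.g.:
# for start in range(0, mel.shape[1], N):
#     loss = train_step(model, optimizer, text,
#                       mel[:, start:start+N], gate[:, start:start+N])
```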
To compare the loss: I already trained a tacotron-2 model with this loss (single speaker), and the generated wav becomes interesting at a loss of around 0.5 (mel-postnet loss around 0.2).
## Results
### Siamese encoder (5 epoch, ~10k steps)
- epoch 4 : loss decreases from 1.27 to 0.95
- epoch 5 : loss decreases from 1.22 to 0.85
### Siamese encoder with additional dense layer
The loss decreases to 1.0 but no lower (only trained for 2-3 epochs because I didn't have enough time...)
### Encoder of this repo
- Loss decreases from 2.8 to 1.8 in epoch 1 (3k steps, batch_size 48 with 40 frames) (eager mode)
Continue training with 15 frames and batch_size 64 (in graph mode) :
- Epoch 2 : avg loss of 1.27
- Epoch 3 : avg loss of 1.22 (min around 1.5)
- Epoch 4 : loss is around 1.14 in first 500 steps
## Future experiments
I think I will train the current model for 2-3 more epochs and see the results; the loss is still decreasing within each epoch, so I hope it will drop below 0.7 and lower in the future.
If it doesn't, here are a few ideas to improve the model:
- [x] Add a Dense layer after the encoder concatenation to reduce the embedding size; that way I can do full transfer learning with the pretrained model (currently I do partial transfer learning, because the RNN decoder has different shapes due to the concatenation of the encoder output).
With this full transfer learning, I could train only the encoder for a few steps (to train the new Dense layer) and after that train the full model for several epochs.
The intuition is that the attention mechanism will already be learned, so training should be much faster.
- [x] It could also be interesting to train the model with speaker embeddings produced by this repo's encoder (I haven't done this yet because embedding my entire dataset takes a very long time with this encoder).
- [ ] Another thing to try could be to train a siamese encoder with embedding dim 256 (currently, the embedding is 64-dim).
- [ ] I could also try a siamese encoder trained on spectrograms instead of raw audio; it might learn more information about frequencies that could help the synthesizer more.
If you have any ideas to improve the model / performance, or if you want to use my model for tests, feel free to post comments here!
|
closed
|
2020-08-24T11:34:01Z
|
2021-10-29T19:08:28Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/507
|
[] |
Ananas120
| 47
|
serengil/deepface
|
deep-learning
| 639
|
running at least one task for all available models in unit tests
|
[Unit tests](https://github.com/serengil/deepface/blob/master/tests/unit_tests.py) must perform at least one verify and represent call for every available model. That way, we can catch bugs early, before a release.
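A minimal sketch of what that could look like with pytest parametrization (the model list and image paths here are illustrative, not the repo's actual fixtures):
```python
import pytest
from deepface import DeepFace

MODELS = ["VGG-Face", "Facenet", "OpenFace", "ArcFace"]

@pytest.mark.parametrize("model_name", MODELS)
def test_verify_and_represent(model_name):
    # Verifying an image against itself should always pass.
    result = DeepFace.verify("tests/dataset/img1.jpg", "tests/dataset/img1.jpg",
                             model_name=model_name)
    assert result["verified"] is True
    embedding = DeepFace.represent("tests/dataset/img1.jpg", model_name=model_name)
    assert len(embedding) > 0
```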
|
closed
|
2023-01-27T11:57:48Z
|
2023-02-01T19:03:34Z
|
https://github.com/serengil/deepface/issues/639
|
[
"enhancement"
] |
serengil
| 1
|
moshi4/pyCirclize
|
data-visualization
| 70
|
Problem with the .gbk parser
|
Hello, I am having the same issue as https://github.com/moshi4/pyCirclize/issues/33#top, however this is about the gbk file. My gbk file has different contigs of different sizes, but only the 1st contig is considered and plotted.
Furthermore, the legends are not written as specified with the handles.
Could you please help with that?
Thanks
|
closed
|
2024-05-23T19:12:00Z
|
2024-05-24T00:02:31Z
|
https://github.com/moshi4/pyCirclize/issues/70
|
[
"question"
] |
DioufMouss0
| 3
|
python-visualization/folium
|
data-visualization
| 1,949
|
Geocoder plugin not working in Google Chrome
|
**Describe the bug**
Hello! Recently I've noted that the Geocoder plugin isn't working in Google Chrome when you use Chrome to open a saved Folium html file. I've noticed the plugin is working in Microsoft Edge, just not Chrome. In Google Chrome, the Geocoder search bar appears but anything you search in it will inevitably come up with "Nothing found". Is this a new bug?
Example of code I'm running to generate an html file that I open in Chrome:
```python
import folium
from folium.plugins import Geocoder

m = folium.Map()  # the original snippet assumed an existing map `m`
Geocoder().add_to(m)
m.save('test.html')
```
|
closed
|
2024-05-14T15:54:40Z
|
2024-06-18T18:52:30Z
|
https://github.com/python-visualization/folium/issues/1949
|
[] |
joeyoey17
| 3
|
dropbox/PyHive
|
sqlalchemy
| 477
|
missing tag for >=v0.6.4
|
Where are these missing tags?...

|
open
|
2024-06-20T02:06:33Z
|
2024-06-20T02:06:33Z
|
https://github.com/dropbox/PyHive/issues/477
|
[] |
fbpcchen
| 0
|
graphql-python/graphql-core
|
graphql
| 142
|
Improve performance of async execution
|
When executing a query against a schema that contains async resolvers, then the executor method creates some overhead by defining and calling async functions (`async def async_...` inside methods in `execute.py`).
We should check whether pre-defining these functions or static methods of the ExecutionContext could reduce this overhead and boost performance. See also benchmarks provided in #141.
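The overhead is easy to measure in isolation; a quick sketch comparing per-call definition of an async closure against a predefined one:
```python
import timeit

def per_call():
    async def resolve():  # defined anew on every execution, as in execute.py
        return 1
    return resolve

async def _shared():
    return 1

def predefined():
    return _shared

print("per-call  :", timeit.timeit(per_call, number=1_000_000))
print("predefined:", timeit.timeit(predefined, number=1_000_000))
```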
|
open
|
2021-10-14T09:03:56Z
|
2021-12-28T23:48:00Z
|
https://github.com/graphql-python/graphql-core/issues/142
|
[
"help wanted",
"optimization",
"investigate"
] |
Cito
| 6
|
521xueweihan/HelloGitHub
|
python
| 2,875
|
[Open Source Self-Recommendation] IntelliConnect: the first LLM-based Agent IoT platform 🌟
|
IntelliConnect is an LLM-based agent IoT platform kernel. It uses an abstract design of agents and thing models, so that users only need minimal definitions to give connected hardware AI capabilities.
## Main features
1. Equipped with a thing model (property, function and event modules) and a complete monitoring module
2. Supports multiple large language models and advanced agent techniques for excellent AI intelligence, allowing smart IoT applications to be built quickly
3. Supports multiple IoT protocols, using emqx exhook for MQTT communication, with strong extensibility
4. Supports OTA over-the-air upgrades
5. Supports WeChat mini programs and WeChat service accounts
6. Uses the common MySQL and Redis databases, easy to get started with
7. Supports the InfluxDB time-series database
## Advantages
🎨 Highly customizable and convenient for secondary development.
Minimalist and clearly layered, following the MVC structure
Complete thing-model abstraction, so IoT developers can focus on the business itself
Rich AI capabilities, with agent technology for rapidly building intelligent AI applications
The project is fully open source under the Apache 2 license and built with Java.
> GitHub: https://github.com/ruanrongman/IntelliConnect Official site: https://ruanrongman.github.io/IntelliConnect/#/
>
> 
>
|
open
|
2024-12-30T08:46:45Z
|
2024-12-30T08:46:45Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2875
|
[] |
ruanrongman
| 0
|
nalepae/pandarallel
|
pandas
| 230
|
Memory and parallelism tuning
|
(1) It seems that memory issues cannot be solved when there is a large amount of data.
(2) If the parallelism is 20, will the original data be copied into 20 copies?
(3) How can I balance memory against CPU to set the optimal parameters, please?
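Lacking a definitive answer, a sketch of the knobs that exist today: `initialize` controls the worker count (and hence how many chunks are in flight at once), and `use_memory_fs` (Linux only) passes chunks through /dev/shm instead of pickling them per process:
```python
import pandas as pd
from pandarallel import pandarallel

# Fewer workers means fewer chunks held in memory simultaneously.
pandarallel.initialize(nb_workers=4, use_memory_fs=True, progress_bar=True)

df = pd.DataFrame({"x": range(1_000_000)})
df["y"] = df.x.parallel_apply(lambda v: v * 2)
```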
|
open
|
2023-04-14T08:56:20Z
|
2024-04-27T10:21:36Z
|
https://github.com/nalepae/pandarallel/issues/230
|
[] |
jamessmith123456
| 4
|
graphistry/pygraphistry
|
pandas
| 323
|
[BUG] layout_tree demo uses non-standard APIs
|
**Describe the bug**
Demo `demos/more_examples/graphistry_features/layout_tree.ipynb` uses non-standard data loaders and renderers instead of Graphistry's, so it's quite surprising and misleading.
|
open
|
2022-03-24T04:10:28Z
|
2022-03-24T04:10:28Z
|
https://github.com/graphistry/pygraphistry/issues/323
|
[
"bug",
"docs"
] |
lmeyerov
| 0
|
piskvorky/gensim
|
nlp
| 3,200
|
Complete triage of PRs for version 4.1.0
|
Gentlemen, there are 6 in-progress PRs that I initially marked for inclusion in the 4.1.0 release [here](https://github.com/RaRe-Technologies/gensim/projects/9).
I propose that we deal with them after the release, as they do not appear to be critical and need not hold back the release. Can you please have a look and let me know?
|
closed
|
2021-07-22T12:27:58Z
|
2021-07-23T07:46:51Z
|
https://github.com/piskvorky/gensim/issues/3200
|
[
"housekeeping"
] |
mpenkov
| 4
|
raphaelvallat/pingouin
|
pandas
| 221
|
Omega Reliability
|
Thanks for the library.
I think this is one of the first to have something related to reliability in Python, in contrast with R, which has many options, in particular psych.
I was wondering if there are plans to incorporate a [McDonald's Omega](https://personality-project.org/r/psych/HowTo/omega.tutorial/omega.html) function for reliability.
References:
- https://github.com/cran/psych/blob/master/R/omega.R
- I have a simple version here: https://github.com/rafaelvalero/different_notebooks/blob/master/adding_omega.ipynb
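For context, a minimal sketch of omega total for a unidimensional model, omega = (sum(lambda))^2 / ((sum(lambda))^2 + sum(theta)), with loadings lambda and error variances theta (the numbers below are illustrative):
```python
import numpy as np

def omega_total(loadings, error_vars):
    """McDonald's omega total for a single-factor model."""
    lam, theta = np.asarray(loadings, float), np.asarray(error_vars, float)
    num = lam.sum() ** 2
    return num / (num + theta.sum())

print(omega_total([0.7, 0.8, 0.6], [1 - 0.7**2, 1 - 0.8**2, 1 - 0.6**2]))  # ~0.75
```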
|
closed
|
2022-01-01T18:39:14Z
|
2024-09-03T09:51:04Z
|
https://github.com/raphaelvallat/pingouin/issues/221
|
[
"feature request :construction:"
] |
rafaelvalero
| 1
|
marimo-team/marimo
|
data-science
| 3,888
|
Custom Anywidget updates in Jupyter but not in Marimo
|
### Describe the bug
Hi, I have a simple anywidget [here](https://github.com/habemus-papadum/anydiff) that shows the diff of two code snippets using codemirror 6. I have a [jupyter notebook](https://github.com/habemus-papadum/anydiff/blob/main/src/anydiff/demo.ipynb) that shows the widget updating as various traitlets are changed and the widget behaves as expected.
When I try to use the [widget in marimo](https://github.com/habemus-papadum/anydiff/blob/main/src/anydiff/marimo_demo.py) using the `mo.ui.marimo` wrapper, the widget renders correctly initially, but fails to update if I edit the cell and re-run. So this is not an issue of traitlet syncing not working, but rather that replacing the entire widget fails.
the javascript for the widget is [here](https://github.com/habemus-papadum/anydiff/blob/main/src/anydiff/index.js).
### Reproduction
```
git clone https://github.com/habemus-papadum/anydiff.git
cd anydiff
uv sync --frozen
source .venv/bin/activate
marimo edit src/anydiff/marimo_demo.py
```
Update the value of `code1` and re-evaluate the cell; the output does not update.
### Logs
```shell
```
### System Info
```shell
anywidget==0.9.13
graphviz-anywidget==0.5.0
jupyter==1.1.1
jupyter-console==6.6.3
jupyter-events==0.11.0
jupyter-lsp==2.2.5
jupyter_client==8.6.3
jupyter_core==5.7.2
jupyter_server==2.15.0
jupyter_server_terminals==0.5.3
jupyterlab==4.3.4
jupyterlab_pygments==0.3.0
jupyterlab_server==2.27.3
jupyterlab_widgets==3.0.13
notebook==7.3.2
notebook_shim==0.2.4
System:
OS: macOS 15.3.1
CPU: (10) arm64 Apple M1 Max
Memory: 184.88 MB / 32.00 GB
Shell: 5.9 - /bin/zsh
Browsers:
Chrome: 133.0.6943.127
Safari: 18.3
```
### Severity
annoyance
### Environment
<details>
```
{
"marimo": "0.11.5",
"OS": "Darwin",
"OS Version": "24.3.0",
"Processor": "arm",
"Python Version": "3.12.7",
"Binaries": {
"Browser": "133.0.6943.127",
"Node": "v23.6.0"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.25.0",
"packaging": "24.2",
"psutil": "5.9.8",
"pygments": "2.18.0",
"pymdown-extensions": "10.14.3",
"pyyaml": "6.0.2",
"ruff": "0.9.4",
"starlette": "0.45.3",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.2"
},
"Optional Dependencies": {
"altair": "5.5.0",
"anywidget": "0.9.13",
"duckdb": "1.1.3",
"pandas": "2.2.3",
"polars": "1.18.0",
"pyarrow": "18.1.0"
},
"Experimental Flags": {}
}
```
</details>
### Code to reproduce
https://github.com/habemus-papadum/anydiff/blob/e68a893c9d695a22a31774d91281f9890bb632c1/src/anydiff/marimo_demo.py#L27
|
closed
|
2025-02-23T20:37:13Z
|
2025-02-25T15:17:42Z
|
https://github.com/marimo-team/marimo/issues/3888
|
[
"bug"
] |
habemus-papadum
| 2
|
pytest-dev/pytest-flask
|
pytest
| 95
|
Rendering Templates with Jinja 2 Does Not Work
|
Hi,
First of all thank you for your great library, I really appreciate your effort.
I am having an issue with testing a function which calls Flask's `render_template` function. Whenever executing the command `pytest` the error shown below is thrown.
```python
def test_generate_notebook_for_experiment():
> notebook = generate_notebook_for_experiment(constants.EXPERIMENT_CONFIG)
test\jupyterlab\jupyter_hub_test.py:16:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
jupyterlab\jupyter_hub.py:31: in generate_notebook_for_experiment
import_code = render_template(f'{TEMPLATE_FOLDER}/imports.pytemplate', configuration = notebook_configuration)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
template_name_or_list = 'notebook/imports.pytemplate'
context = {'configuration': {'collections': {'edge_collections': {'BNL_data_juvnf6wv': 'pcbs_local_BNL_data_juvnf6wv_data_edges'...': 'gduser@Siemens', 'port': 8080, ...}, 'graphname': 'pcbs_local', 'module': 'gdf.contrib.managers.ArangoDBManager'}}}
ctx = None
def render_template(template_name_or_list, **context):
"""Renders a template from the template folder with the given
context.
:param template_name_or_list: the name of the template to be
rendered, or an iterable with template names
the first one existing will be rendered
:param context: the variables that should be available in the
context of the template.
"""
ctx = _app_ctx_stack.top
> ctx.app.update_template_context(context)
E AttributeError: 'NoneType' object has no attribute 'app'
```
The following snippet shows the method to be tested, which invokes the `render_template` function.
```python
from typing import Dict

from nbformat.v4 import new_notebook, new_code_cell, new_markdown_cell
from nbformat import NotebookNode
from flask import Flask, render_template

def generate_notebook_for_experiment(notebook_configuration: Dict) -> NotebookNode:
    """Create a new Jupyter notebook with template code for the specified experiment."""
    TEMPLATE_FOLDER = 'notebook'
    title = new_markdown_cell('# Experiment Notebook')
    import_code = render_template(f'{TEMPLATE_FOLDER}/imports.pytemplate', configuration=notebook_configuration)
    import_code_cell = new_code_cell(import_code, metadata=__create_metadata_for_hidden_cell__())
    notebook = new_notebook(
        cells=[title, import_code_cell],
        ...
    )
    return notebook
```
This is the complete test method causing the error:
```python
def test_generate_notebook_for_experiment():
    notebook = generate_notebook_for_experiment(constants.EXPERIMENT_CONFIG)
    assert notebook.nbformat == 4
    nbformat.validate(nbjson=notebook, version=4, version_minor=2)
```
My code for creating the fixture is shown below.
```python
import pytest
from gdrest.application import create_app

@pytest.fixture
def app():
    app, _ = create_app()
    return app
```
What do I need to do to make sure the Jinja2 runtime environment is appropriately configured by Flask so the function `render_template` is safe to execute? Thank you so much for your answers!
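For what it's worth, `render_template` resolves the current app through an active application context; the sketch below pushes one explicitly, assuming the `app` fixture shown above (a sketch, not necessarily the most idiomatic pytest-flask approach):
```python
def test_generate_notebook_for_experiment(app):
    with app.app_context():  # gives render_template an app to resolve
        notebook = generate_notebook_for_experiment(constants.EXPERIMENT_CONFIG)
    assert notebook.nbformat == 4
```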
|
closed
|
2019-04-25T19:09:27Z
|
2020-10-04T23:36:07Z
|
https://github.com/pytest-dev/pytest-flask/issues/95
|
[
"question"
] |
agentS
| 1
|
WZMIAOMIAO/deep-learning-for-image-processing
|
pytorch
| 749
|
yolov3_spp: running train on the model raises TypeError: 'numpy._DTypeMeta' object is not subscriptable
|
Guys, does anyone know what's going on?
|
open
|
2023-08-14T16:10:42Z
|
2023-08-14T16:10:42Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/749
|
[] |
pastrami06
| 0
|
sanic-org/sanic
|
asyncio
| 2,242
|
Blueprint exceptions are sometimes handled by other blueprint's exception handlers
|
**Describe the bug**
Exceptions thrown from within one blueprint are sometimes handled by the exception handler of another, unrelated blueprint instead of by its own. I have not had time to check whether this happens all the time or only if specific conditions are met, but I have attached a code snippet with which the issue can be reproduced.
**Code snippet**
```py
from sanic import Blueprint, HTTPResponse, Request, Sanic
from sanic.exceptions import SanicException
from sanic.response import text

class BaseException(SanicException):
    code: int
    description: str
    status_code: int
    headers: dict = {}

class ExceptionA(BaseException):
    error: str

class ExceptionB(BaseException):
    code = 0
    status_code = 400

class ExceptionBA(ExceptionB, ExceptionA):
    error = "foo"
    description = "Bar!"

app = Sanic("my_app")
bp1 = Blueprint("auth", url_prefix="/auth")
bp2 = Blueprint("token")

@bp1.route("/network", version=1)
async def error_bp1(_: Request):
    raise ExceptionBA()

@bp2.route("/token")
async def hello_bp2(_: Request):
    return text("P3rry7hePl4typu5")

bpg1 = Blueprint.group(bp1, version_prefix="/api/v")
bpg2 = Blueprint.group(bp2, url_prefix="/api/oauth2")

for bp in bpg1.blueprints:
    @bp.exception(BaseException)
    async def _(_, ex: BaseException) -> HTTPResponse:
        return text("BPG1_BaseException")

    @bp.exception(Exception)
    async def _(request: Request, ex: Exception) -> HTTPResponse:
        return text("BPG1_Exception")

for bp in bpg2.blueprints:
    @bp.exception(ExceptionA)
    async def _(_, ex: ExceptionA):
        return text("BPG2_ExceptionA")

    @bp.exception(Exception)
    async def _(request: Request, ex: Exception):
        return text("BPG2_Exception")

bpg_all = Blueprint.group(bpg1, bpg2)
app.blueprint(bpg_all)

if __name__ == "__main__":
    app.run(debug=True)
```
**Expected behavior**
When accessing `/api/v1/auth/network`, an exception is raised, which is captured by the `bpg1` exception handler; rendering a final response which displays the string `BPG1_BaseException`.
**Actual behavior**
When accessing `/api/v1/auth/network`, an exception is raised, which is (somehow) captured by the `bpg2` exception handler; rendering a final response which displays the string `BPG2_ExceptionA`.
**Environment (please complete the following information):**
- OS: Windows 10 Pro; 21H1; Compilation 19043.1165.
- Versions: Sanic 21.6.2; Routing 0.7.1
|
closed
|
2021-09-17T10:47:30Z
|
2021-09-29T10:47:31Z
|
https://github.com/sanic-org/sanic/issues/2242
|
[] |
prryplatypus
| 2
|
elliotgao2/toapi
|
flask
| 56
|
Expose WSGI interface for gunicorn or uwsgi.
|
closed
|
2017-12-13T01:20:05Z
|
2017-12-13T03:44:20Z
|
https://github.com/elliotgao2/toapi/issues/56
|
[] |
elliotgao2
| 0
|
|
openapi-generators/openapi-python-client
|
rest-api
| 773
|
generated client doesn't support proxies
|
**Is your feature request related to a problem? Please describe.**
The generated client is nice - clean, ergonomic, and lean - but it doesn't provide the full ability to pass arguments through to `httpx`. In the case I care about right now, I need to pass a proxy configuration to it (and I need to do so in code, I cannot rely on environment variables for an uninteresting reason, which is that I need one application to use different proxies in different places).
**Describe the solution you'd like**
I believe something like #202 would help, but I'm open to other ideas. It occurs to me that the format of the proxies dict that `httpx` accepts is actually itself an implementation detail specific to `httpx`. `requests`, for example, uses keys like `http` and `https` while `httpx`'s proxy keys are `http://` and `https://`.
**Describe alternatives you've considered**
So far I've customized the `client.py.jinja` and `endpoint_module.py.jinja` templates in the obvious way, and it works, but I don't want to be subject to bugs if the templates change in future versions, I'd rather `openapi-python-client` intentionally support some form of proxy config.
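One possible shape for that support, assuming a pass-through dict handed verbatim to `httpx.Client` (the `httpx_args` parameter name and generated `Client` here are hypothetical, not the generator's confirmed API):
```python
import httpx

# Hypothetical generated client accepting httpx constructor kwargs verbatim:
# client = Client(base_url="https://api.example.com",
#                 httpx_args={"proxies": {"http://": "http://10.0.0.1:3128",
#                                         "https://": "http://10.0.0.1:3128"}})

# The underlying httpx call it would translate to ("proxies" was the
# parameter name at the time of this issue):
http_client = httpx.Client(proxies={"http://": "http://10.0.0.1:3128",
                                    "https://": "http://10.0.0.1:3128"})
```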
|
closed
|
2023-06-27T02:09:34Z
|
2023-07-23T19:38:28Z
|
https://github.com/openapi-generators/openapi-python-client/issues/773
|
[
"✨ enhancement"
] |
leifwalsh
| 3
|
microsoft/nlp-recipes
|
nlp
| 128
|
[Example] Question Answering - BiDAF
|
closed
|
2019-06-25T15:09:13Z
|
2019-07-05T14:48:01Z
|
https://github.com/microsoft/nlp-recipes/issues/128
|
[
"example"
] |
saidbleik
| 0
|
|
autokey/autokey
|
automation
| 473
|
engine.create_phrase() does not enable the phrase.
|
## Classification:
The engine.create_phrase function does not operate as expected: while it does create the phrase, it does not "activate" it to the point where it would immediately start working.
## Reproducibility:
*Always*
## Version
AutoKey version: current development version
Used GUI (Gtk, Qt, or both): GTK, but Qt would presumably be equally affected
If the problem is known to be present in more than one version, please list all of those.
Installed via: git, tested on current develop branch.
Linux Distribution: ubuntu 20.04
## Summary
When using `engine.create_phrase`, it does not add the phrase and "activate" it as it seems it should. The cause seems to be that a "mode" isn't added anywhere to the phrase in the function.
However, adding this mode, either manually in the autokey script or in the function itself, has unexpected results:
```
folder = engine.get_folder("numberpad")
hot = engine.create_phrase(folder, name="one", hotkey=([],"j"), contents="1")
hot.modes.append(autokey.model.helpers.TriggerMode.HOTKEY)
hot.persist()
```
The phrase object in question will correctly add and save the mode in the json file, but the hotkey will not be handled correctly: when the hotkey is triggered it will send both the hotkey and the phrase, so after running this script, when I press "j", instead of getting a "1" I get "j1".
If autokey (the entire process, not just closing/reopening the window) is restarted after the addition of the key, there are no issues and it functions as expected.
## Steps to Reproduce (if applicable)
Adding a phrase should activate the phrase in the autokey internals; hotkeys also "eat" the keystroke used to activate them, except in the case of a newly engine-created hotkey/phrase.
## Expected Results
With the script provided above, I want to press "j" and only output "1".
## Actual Results
It outputs "j1"
## Notes
If you've found a workaround, please provide it here.
I suppose one could restart autokey at the end of the script, but that would be a pretty extreme workaround.
|
closed
|
2020-11-17T16:48:32Z
|
2023-04-26T09:05:25Z
|
https://github.com/autokey/autokey/issues/473
|
[
"invalid"
] |
sebastiansam55
| 3
|
numpy/numpy
|
numpy
| 28,187
|
DOC: Duplication of Fortran compiler installation instructions
|
### Issue with current documentation:
Currently there are two places in the documentation which talks about installing a Fortran compiler on Windows:
- The [building from source page](https://numpy.org/doc/stable/building/index.html)
- The [F2PY and Windows page](https://numpy.org/doc/stable/f2py/windows/index.html)
As noted in https://github.com/numpy/numpy/pull/28141#pullrequestreview-2561035821 I think its best to have all the Fortran-Windows-Compiler stuff in the F2Py docs, and link out to it from the build documentation page.
### Idea or request for content:
_No response_
|
open
|
2025-01-19T15:41:04Z
|
2025-01-19T15:41:12Z
|
https://github.com/numpy/numpy/issues/28187
|
[
"04 - Documentation"
] |
HaoZeke
| 0
|
2noise/ChatTTS
|
python
| 841
|
Unable to synthesize digits
|
I ran into a problem where digits cannot be synthesized when synthesizing phone-number-related text. The code is as follows:
chat = Chat()
chat.load()
text = "My phone number is 1234567889"
waveform = chat.infer(text)
The synthesized audio is missing the trailing digits, and the following message is printed:
**found invalid characters: {'8', '2', '3', '6', '7', '4', '5', '1'}**
|
closed
|
2024-12-13T10:46:34Z
|
2024-12-17T02:32:41Z
|
https://github.com/2noise/ChatTTS/issues/841
|
[] |
di-osc
| 1
|
iperov/DeepFaceLab
|
deep-learning
| 636
|
SAEHD neural network architecture diagram
|
I'm studying this project recently, because there are many parameters, I can't understand the architecture from a global perspective when reading the network architecture of SAEHD. Can anyone draw a diagram of this architecture? Or give some suggestions, thanks.
|
open
|
2020-02-26T18:45:15Z
|
2023-06-08T20:07:13Z
|
https://github.com/iperov/DeepFaceLab/issues/636
|
[] |
notknowchild
| 6
|
kynan/nbstripout
|
jupyter
| 199
|
Feature request: accept 0 as execution_count
|
Would it be possible to define as a parameter that an `{"execution_count": 0}` could also be accepted and left unchanged (or execution_count maybe ignored altogether)?
The question comes from the fact that Databricks, when not committing outputs, writes 0 there (even if "null" would have been also a good option) -- and therefore nbstripout is always triggered.
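As a stopgap, a small nbformat sketch that normalizes Databricks' `0` to `null` before nbstripout ever sees it:
```python
import nbformat

nb = nbformat.read("notebook.ipynb", as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code" and cell.get("execution_count") == 0:
        cell["execution_count"] = None  # what nbstripout expects to find
nbformat.write(nb, "notebook.ipynb")
```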
|
closed
|
2025-02-24T13:01:59Z
|
2025-03-16T08:32:47Z
|
https://github.com/kynan/nbstripout/issues/199
|
[
"type:question",
"type:documentation",
"resolution:answered"
] |
danielsparing
| 5
|
ageitgey/face_recognition
|
machine-learning
| 1,271
|
CNN Model slow.
|
* face_recognition version: 1.2.3
* Python version: 3.9.0
* Operating System: Windows 10 19041.746 (i5 9600KF, 1660 Super)
### Description
I've created an application which streams a series of images to a 'server' which processes the frames using this face_recognition library. When changing the model from `hog` to `cnn` the server just stops processing and almost freezes. Do I need to install any CUDA packages for the library to use the `cnn` model?
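The `cnn` model runs on dlib, which only uses the GPU if dlib itself was compiled with CUDA; a quick check (the device-count line assumes a CUDA-enabled build):
```python
import dlib

print(dlib.DLIB_USE_CUDA)                # False means dlib was built CPU-only
if dlib.DLIB_USE_CUDA:
    print(dlib.cuda.get_num_devices())   # visible CUDA devices
```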
|
open
|
2021-01-26T08:59:00Z
|
2021-03-07T04:55:18Z
|
https://github.com/ageitgey/face_recognition/issues/1271
|
[] |
MattPChoy
| 1
|
erdewit/ib_insync
|
asyncio
| 274
|
We're discarding trade RT Volume times
|
- [Ticker.lastSize](https://github.com/erdewit/ib_insync/blob/master/ib_insync/ticker.py#L54) is `float`
- [TickData.size](https://github.com/erdewit/ib_insync/blob/master/ib_insync/objects.py#L206) is `int`
Is there a reason to do with different asset types?
I'm also noticing some duplicate code where this seems to be set in `Wrapper.tickString()`:
- [block for handling tick types `48` **and** `77`](https://github.com/erdewit/ib_insync/blob/master/ib_insync/wrapper.py#L747-L772) where `Ticker.lastSize` is set as an `int`
- [block for handling tick types `77`](https://github.com/erdewit/ib_insync/blob/master/ib_insync/wrapper.py#L783-L805) where `Ticker.lastSize` is also set as an `int`
Actually these two blocks look almost identical minus [this little nested branch section](https://github.com/erdewit/ib_insync/blob/master/ib_insync/wrapper.py#L754-L757).
I'm pretty sure these could be factored.
I'm only looking at this because I realized we're **always** discarding the [RT Volume](https://interactivebrokers.github.io/tws-api/tick_types.html#rt_volume), [clearing time](https://github.com/erdewit/ib_insync/blob/master/ib_insync/wrapper.py#L752) from the exchange.
This pairs with questions I had in #272.
Normally i would expect tick data to have the exchange clearing time for any trades/volume.
As an example (after I've patched the code - don't worry I'll make a PR) this is what I now see from my streaming tick data after adding a `last_fill_time` field from the exchange clearing time we are discarding:
```python
trio got: {'time': 1593468211.4448721, 'tickType': 3, 'price': 3046.5, 'size': 4, 'last_fill_time': None}
trio got: {'time': 1593468211.6727612, 'tickType': 77, 'price': 3046.25, 'size': 7, 'last_fill_time': 1593468211.081}
trio got: {'time': 1593468211.6734376, 'tickType': 3, 'price': 3046.5, 'size': 7, 'last_fill_time': None}
trio got: {'time': 1593468211.6734376, 'tickType': 3, 'price': 3046.5, 'size': 7, 'last_fill_time': None}
trio got: {'time': 1593468211.6734376, 'tickType': 3, 'price': 3046.5, 'size': 7, 'last_fill_time': None}
trio got: {'time': 1593468211.872757, 'tickType': 77, 'price': 3046.25, 'size': 1, 'last_fill_time': 1593468211.281}
trio got: {'time': 1593468211.872757, 'tickType': 77, 'price': 3046.25, 'size': 1, 'last_fill_time': 1593468211.281}
trio got: {'time': 1593468212.3658435, 'tickType': 48, 'price': 3046.25, 'size': 1, 'last_fill_time': 1593468211.774}
trio got: {'time': 1593468212.3662589, 'tickType': 77, 'price': 3046.25, 'size': 1, 'last_fill_time': 1593468211.774}
trio got: {'time': 1593468212.3925517, 'tickType': 77, 'price': 3046.25, 'size': 8, 'last_fill_time': 1593468211.801}
trio got: {'time': 1593468212.3925517, 'tickType': 77, 'price': 3046.25, 'size': 8, 'last_fill_time': 1593468211.801}
trio got: {'time': 1593468212.6746254, 'tickType': 8, 'price': -1.0, 'size': 1699, 'last_fill_time': None}
trio got: {'time': 1593468212.6753259, 'tickType': 0, 'price': 3046.25, 'size': 5, 'last_fill_time': None}
trio got: {'time': 1593468212.6753259, 'tickType': 3, 'price': 3046.5, 'size': 8, 'last_fill_time': None}
trio got: {'time': 1593468212.6753259, 'tickType': 0, 'price': 3046.25, 'size': 5, 'last_fill_time': None}
trio got: {'time': 1593468212.6753259, 'tickType': 3, 'price': 3046.5, 'size': 8, 'last_fill_time': None}
```
Hey @drusakov778, look you can actually see the exchange -> `ib_insync` latency now :wink:.
|
closed
|
2020-06-29T22:07:27Z
|
2020-09-20T20:41:50Z
|
https://github.com/erdewit/ib_insync/issues/274
|
[] |
goodboy
| 11
|
mljar/mercury
|
data-visualization
| 51
|
add ability to change the PORT for mercury watch command
|
As of `0.5.1` there is no way to change the port for mercury when using `watch`. There are various work arounds using `runserver` and [something external](https://calmcode.io/entr/introduction.html) to control restarts, but this would be a nice convenience.
#50
|
closed
|
2022-02-23T17:01:19Z
|
2023-02-15T10:09:58Z
|
https://github.com/mljar/mercury/issues/51
|
[
"enhancement"
] |
mckeown12
| 1
|
python-visualization/folium
|
data-visualization
| 1,161
|
basemap doesn't appear when changing default crs to EPSG4326
|
#### Please add a code sample or a nbviewer link, copy-pastable if possible
```python
# Your code here
# default crs
m = folium.Map(location=[45.5236, -122.6750], tiles='OpenStreetMap')
m
# change default crs to EPSG4326
m = folium.Map(location=[45.5236, -122.6750], crs='EPSG4326', tiles='OpenStreetMap')
m
```
#### Problem description
The base map is quite different from the default after changing the crs to EPSG4326. Can anyone explain why? Also, if I need to use EPSG4326, how can I still get the correct base map?
#### Expected Output
#### Output of ``folium.__version__``
|
closed
|
2019-06-14T06:45:54Z
|
2020-03-05T02:24:37Z
|
https://github.com/python-visualization/folium/issues/1161
|
[
"question"
] |
KarenChen9999
| 3
|
seanharr11/etlalchemy
|
sqlalchemy
| 17
|
NameError: global name ‘dump_sql_statement’ is not defined
|
.
.
.
.
ETLAlchemySource (INFO) - Creating 'upsert' statements for '3' rows, and dumping to 'demo_tags_sum.sql'.
ETLAlchemySource (INFO) - Creating 'insert' stmts for (the remaining)3 rows, and dumping to 'demo_tags_sum.sql' (because they DNE in the table!).
ETLAlchemySource (INFO) - (3) -- Inserting remaining '3' rows.
Traceback (most recent call last):
File "app.py", line 13, in <module>
migrate()
File "app.py", line 10, in migrate
tgt.migrate(migrate_fks=False, migrate_indexes=True, migrate_data= True, migrate_schema=True)
File "/usr/local/lib/python2.7/site-packages/etlalchemy/ETLAlchemyTarget.py", line 86, in migrate
migrate_data=migrate_data)
File "/usr/local/lib/python2.7/site-packages/etlalchemy/ETLAlchemySource.py", line 1117, in migrate
pks, Session)
File "/usr/local/lib/python2.7/site-packages/etlalchemy/ETLAlchemySource.py", line 846, in dump_data
dump_sql_statement(
NameError: global name '**dump_sql_statement**' is not defined
|
closed
|
2017-04-26T07:51:46Z
|
2017-04-26T13:08:13Z
|
https://github.com/seanharr11/etlalchemy/issues/17
|
[] |
gauravdalvi63
| 1
|
kangvcar/InfoSpider
|
automation
| 16
|
Is there no Weibo data source?
|
## Bug Report
**Description**: [Description of the issue]
**Expected behavior**: [What should happen]
**Current behavior**: [What happens instead of the expected behavior]
**Steps to Reproduce**:
1. [First Step]
2. [Second Step]
3. [and so on…]
**Reproduce how often**: [What percentage of the time does it reproduce?]
**Possible solution**: [Not obligatory, but suggest a fix/reason for the bug]
**Context (Environment)**:[The code version, python version, operating system or other software/libs you use]
## Additional Information
[Any other useful information about the problem].
|
closed
|
2020-08-30T15:05:43Z
|
2020-08-31T02:17:16Z
|
https://github.com/kangvcar/InfoSpider/issues/16
|
[
"bug"
] |
tianyw0
| 1
|
jofpin/trape
|
flask
| 34
|
Remotely Sending files
|
While sending files from our system to the victim's system, it shows the following:


|
closed
|
2018-02-12T11:08:03Z
|
2018-11-24T01:57:40Z
|
https://github.com/jofpin/trape/issues/34
|
[] |
WinXprt
| 4
|
aleju/imgaug
|
machine-learning
| 764
|
repo owner
|
@aleju, it appears that you stopped being active on GitHub around June last year. I personally hope that everything is fine!
I open this issue to see if there are collaborators that actually have full access to this repo and can do releases. If not, the main question would be if you, @aleju, would be open to someone joining the effort of carrying forward this awesome library. I think many people out there really like and depend on it.
|
open
|
2021-04-20T10:01:47Z
|
2022-09-16T07:26:26Z
|
https://github.com/aleju/imgaug/issues/764
|
[] |
patzm
| 9
|
liangliangyy/DjangoBlog
|
django
| 171
|
Boss, may I reference this project for my graduation project?
|
I'm a newbie. I picked web design for my graduation project; the topic is open-ended, but I had no data source. After searching around, only a blog system looked simple enough. Since I only know a little Python, I read up a bit, learned some Django, and followed online tutorials, but found there was a lot I didn't understand. So I searched GitHub and plan to reference yours while implementing my own. Thanks in advance if that's okay.
|
closed
|
2018-10-15T13:21:43Z
|
2018-10-16T06:32:00Z
|
https://github.com/liangliangyy/DjangoBlog/issues/171
|
[] |
jy2lhz
| 2
|
zihangdai/xlnet
|
nlp
| 167
|
Can we perform multi-language tasks with the current model?
|
I wonder whether we can use the large model to deal with tasks in other languages such as Chinese, Japanese and Korean, or does the model only support English so far?
|
closed
|
2019-07-17T08:43:19Z
|
2019-07-24T03:57:17Z
|
https://github.com/zihangdai/xlnet/issues/167
|
[] |
MaXXXXfeng
| 2
|
marcomusy/vedo
|
numpy
| 723
|
Question about "cutting" a mesh
|
Hi!
I have a mesh formed by a lot of triangles. I want to know if it's possible to "cut/subset" the faces and points of that mesh with certain position criteria, for example, all faces above z = 1000 or similar.
Thx in advance!
|
closed
|
2022-10-26T20:21:04Z
|
2022-11-07T13:35:08Z
|
https://github.com/marcomusy/vedo/issues/723
|
[] |
ManuGraiph
| 2
|
msoedov/langcorn
|
rest-api
| 21
|
Two Chains in One File
|
Seems like this issue is present if you try to define two chains in one file:
https://github.com/tiangolo/fastapi/discussions/7455
Simple workaround is obviously "don't do that", but often it's useful to define two chains with related purposes in the same file.
I believe the real fix is to ensure that the derived class name is unique not just to the module but to the chain. The endpoint prefix would also need to be unique. So on this line:
https://github.com/msoedov/langcorn/blob/e36e83147d43dcb476a5c984f081330bd56a8d6b/langcorn/server/api.py#L128
Make it read something like:
```
endpoint_prefix = lang_app.replace(":", '.')
```
That way the function name stays with the info all the way through. I haven't tested this fix, but I think it may work if someone has time to test it.
|
closed
|
2023-06-29T16:03:34Z
|
2023-07-12T21:20:32Z
|
https://github.com/msoedov/langcorn/issues/21
|
[] |
kevroy314
| 5
|
dgtlmoon/changedetection.io
|
web-scraping
| 2,543
|
When iframed on a Chromium-based browser, the CSRF token check fails
|
**Describe the bug**
When the CD interface is in an iframe, any edit form causes a CSRF failure. **It works fine on Firefox, though!**
Checking the DevTools, I can definitely see the `csrf_token` sent both on iframed requests and the raw page requests.
I noticed this when trying to embed ChangeDetection.io into the Home Assistant dashboard. It could also be an issue when using a homelab dashboard, such as Organizr, which seems to work through iframing as well.
**Version**
*Exact version* in the top right area: 0.46.02
**To Reproduce**
Steps to reproduce the behavior:
1. Create a sample page with: `<iframe src="http://your-instance-address" width="100%" height="100%"></iframe>`
2. open it on Chrome, Vivaldi, or some other Chrome-sibling.
3. fill any address to the "add a new change detection watch" field and submit it. You can also edit any existing watch.
4. _Bad Request: The CSRF session token is missing._
**Expected behavior**
Work as usual.
**Desktop (please complete the following information):**
- OS: Linux Mint
- Browser Chrome 126.0.6478.182 / Vivaldi 6.8.3381.48 (works fine on Firefox 128.0)
|
open
|
2024-08-04T01:14:58Z
|
2024-08-04T01:19:40Z
|
https://github.com/dgtlmoon/changedetection.io/issues/2543
|
[
"triage"
] |
igorsantos07
| 0
|
facebookresearch/fairseq
|
pytorch
| 5,112
|
STOP dataset BART experiment
|
Hello,
Could you please provide the code of the GT experiment mentioned in the STOP dataset paper: https://arxiv.org/pdf/2207.10643.pdf
The GT is the experiment where you used the ground truth transcription with the BART model and achieved 85.25% EM on the test set.
I appreciate any help you can provide.
|
open
|
2023-05-22T12:41:16Z
|
2023-05-22T12:41:16Z
|
https://github.com/facebookresearch/fairseq/issues/5112
|
[
"question",
"needs triage"
] |
othman-istaiteh
| 0
|
zappa/Zappa
|
flask
| 585
|
[Migrated] pipenv install zappa fails
|
Originally from: https://github.com/Miserlou/Zappa/issues/1528 by [dins](https://github.com/dins)
Installing zappa with pipenv fails because the futures==3.2.0 dependency cannot be met.
## Context
Installing zappa on new pipenv virtualenv.
## Expected Behavior
Should install zappa with dependencies.
## Actual Behavior
```
Could not find a version that satisfies the requirement futures==3.2.0 (from -r /var/folders/zx/zw2xhqr13nv4t_p386tlwnlr0000gn/T/pipenv-04ld0l04-requirements/pipenv-wgujth9u-requirement.txt (line 1)) (from versions: 0.2.python3, 0.1, 0.2, 1.0, 2.0, 2.1, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.6, 2.2.0, 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.1.0, 3.1.1)
```
## Steps to Reproduce
```
pipenv --three # Using python 3.6.5 on OSX
pipenv install zappa
```
## Your Environment
* Zappa version used: 0.46.0
* Operating System and Python version: OSX High Sierra, python 3.6.5
* Pipenv version: 2018.05.18
|
closed
|
2021-02-20T12:23:07Z
|
2022-07-16T07:00:14Z
|
https://github.com/zappa/Zappa/issues/585
|
[] |
jneves
| 1
|
ijl/orjson
|
numpy
| 489
|
OSS-Fuzz Integration
|
hello!
My name is McKenna Dallmeyer and I would like to submit orjson to OSS-Fuzz. If you are not familiar with the project, OSS-Fuzz is Google's platform for continuous fuzzing of Open Source Software. In order to get the most out of this program, it would be greatly beneficial to be able to merge-in my fuzz harness and build scripts into the upstream repository and contribute bug fixes if they come up. Is this something that you would support me putting the effort into?
Thank you!
|
closed
|
2024-05-30T00:49:10Z
|
2024-06-07T08:02:02Z
|
https://github.com/ijl/orjson/issues/489
|
[
"Stale"
] |
ennamarie19
| 0
|
amidaware/tacticalrmm
|
django
| 2,112
|
[Feature request] pre-made global variables
|
**Is your feature request related to a problem? Please describe.**
we have pre-made variables for agent, client, and site, but nothing global
I was working on a couple of scripts that interact with the API, and for easier sharing it would be useful to have some common global variables for everyone
**Describe the solution you'd like**
```
{{global.apiurl}}
{{global.rmmurl}}
{{global.meshurl}}
{{global.baseurl}}
```
they would give:
```
rmm-api.exemple.com
rmm-rmm.exemple.com
rmm-mesh.exemple.com
exemple.com
```
and maybe others?
**Describe alternatives you've considered**
I made these for myself, but it could be useful for everyone to have them. A sketch of the substitution I have in mind is below.
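For illustration, a minimal sketch of the intended substitution (the variable names and the derivation from a base URL are my own assumptions):
```python
import re

# Hypothetical global values, derived from a base URL for illustration only.
GLOBALS = {
    "baseurl": "exemple.com",
    "apiurl": "rmm-api.exemple.com",
    "rmmurl": "rmm-rmm.exemple.com",
    "meshurl": "rmm-mesh.exemple.com",
}

def expand_globals(script_text: str) -> str:
    # Replace each {{global.key}} placeholder with its configured value,
    # leaving unknown placeholders untouched.
    return re.sub(
        r"\{\{\s*global\.(\w+)\s*\}\}",
        lambda m: GLOBALS.get(m.group(1), m.group(0)),
        script_text,
    )

print(expand_globals("curl https://{{global.apiurl}}/api/v3/agents/"))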
|
open
|
2025-01-05T23:37:48Z
|
2025-01-05T23:37:48Z
|
https://github.com/amidaware/tacticalrmm/issues/2112
|
[] |
P6g9YHK6
| 0
|
huggingface/transformers
|
deep-learning
| 36,924
|
<spam>
|
### System Info
test
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
xxx
### Expected behavior
xxxx
|
closed
|
2025-03-24T11:58:09Z
|
2025-03-24T13:55:19Z
|
https://github.com/huggingface/transformers/issues/36924
|
[
"bug"
] |
suhaibo666
| 0
|
uriyyo/fastapi-pagination
|
fastapi
| 751
|
Motor pagination with "aggregate"
|
Hey, I'm trying to paginate over aggregate (using time series) and the `fastapi_pagination.ext.motor.paginate` seems to work only with `find`.
Am I missing something, or is it not supported (yet)?
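In case it helps, here is the manual workaround sketch I am falling back to for now (`paginate_aggregate` is my own helper, not part of the library, under the assumption that only `find` is wrapped):
```python
from fastapi_pagination import Page, Params
from fastapi_pagination.api import create_page

# Hypothetical workaround: run the aggregation by hand with $skip/$limit
# stages and build the page yourself instead of relying on ext.motor.paginate.
async def paginate_aggregate(collection, pipeline: list, params: Params) -> Page:
    raw = params.to_raw_params()
    counted = await collection.aggregate([*pipeline, {"$count": "n"}]).to_list(1)
    total = counted[0]["n"] if counted else 0
    items = await collection.aggregate(
        [*pipeline, {"$skip": raw.offset}, {"$limit": raw.limit}]
    ).to_list(raw.limit)
    return create_page(items, total, params)
```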
|
closed
|
2023-07-19T09:28:52Z
|
2023-08-15T07:59:32Z
|
https://github.com/uriyyo/fastapi-pagination/issues/751
|
[
"question"
] |
jochman
| 1
|
mlfoundations/open_clip
|
computer-vision
| 932
|
training details for convnext_large_d.laion2B-s26B-b102K-augreg
|
Hi,
I have been looking for training details for convnext_large_d.laion2B-s26B-b102K-augreg but couldn't find anything.
Could you please provide these details? Thanks.
|
closed
|
2024-09-04T01:39:30Z
|
2024-09-05T20:54:22Z
|
https://github.com/mlfoundations/open_clip/issues/932
|
[] |
mumu-mu
| 1
|
koaning/scikit-lego
|
scikit-learn
| 483
|
[FEATURE] New Meta Model: MergedAverage Classifiers via Hashing
|
https://arxiv.org/pdf/1910.13830.pdf
|
closed
|
2021-09-05T14:57:39Z
|
2021-12-21T08:13:59Z
|
https://github.com/koaning/scikit-lego/issues/483
|
[
"enhancement"
] |
koaning
| 1
|
apache/airflow
|
data-science
| 47,167
|
Trino split_statements is default to False
|
### Apache Airflow Provider(s)
trino
### Versions of Apache Airflow Providers
6.0.1
### Apache Airflow version
2.9.0
### Operating System
Red Hat Enterprise Linux 9.5
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
I'm following this [document](https://airflow.apache.org/docs/apache-airflow-providers-trino/stable/operators/trino.html#using-the-operator) to use SQLExecuteQueryOperator to issue multiple queries separated with semicolon to Trino.
I got a syntax error complaining about `mismatched input ';'`. I have to add an explicit `split_statements=True` parameter to get it to work.
### What you think should happen instead
SQLExecuteQueryOperator defers ``split_statements`` to the default value in the ``run`` method of the configured hook. No such default is defined in TrinoHook, so it falls back to DbApiHook's default of False and throws the error.
### How to reproduce
```python
SQLExecuteQueryOperator(
    task_id='t1',
    sql='SELECT 1; SELECT 2;',
    conn_id='loading_user_trino'  # Adding split_statements=True will work
)
```
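For reference, the same task with the workaround applied (a sketch; the import path assumes the common-sql provider is installed):
```python
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator

# split_statements must be passed explicitly because TrinoHook
# defines no default of its own.
SQLExecuteQueryOperator(
    task_id="t1",
    sql="SELECT 1; SELECT 2;",
    conn_id="loading_user_trino",
    split_statements=True,
)
```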
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
open
|
2025-02-27T19:38:36Z
|
2025-03-10T22:05:45Z
|
https://github.com/apache/airflow/issues/47167
|
[
"kind:bug",
"area:providers",
"good first issue",
"provider:trino"
] |
willshen99
| 3
|
deepspeedai/DeepSpeed
|
deep-learning
| 6,692
|
Installing DeepSpeed in WSL.
|
I am using Windows 11. I have Windows Subsystem for Linux (Ubuntu) activated, and have installed CUDA and the Visual Studio C++ Build Tools. I am trying to install DeepSpeed. However, I am getting the following 2 errors. Could anybody please help resolve this?


|
closed
|
2024-10-30T20:17:24Z
|
2024-11-08T22:11:37Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6692
|
[
"install",
"windows"
] |
anonymous-user803
| 5
|
tensorflow/tensor2tensor
|
machine-learning
| 1,660
|
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
|
### Description
I am getting this error when running the command in **Commands to generate WikisumCommonCrawl**
`python -m tensor2tensor.data_generators.wikisum.parallel_launch \
--num_instances=1000 \
--cpu=1 --mem=2 \
--name=wikisum-cc-refs \
--log_dir=$BUCKET/logs \
--setup_command="pip install tensor2tensor tensorflow -U -q --user" \
--command_prefix="python -m tensor2tensor.data_generators.wikisum.get_references_commoncrawl --num_tasks=1000 --out_dir=$BUCKET/wiki_references --task_id"`
```
# part of output
INFO:tensorflow:Creating instance wikisum-cc-refs-245
INFO:tensorflow:Creating instance wikisum-cc-refs-246
INFO:tensorflow:Creating instance wikisum-cc-refs-247
INFO:tensorflow:Creating instance wikisum-cc-refs-248
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
- The resource 'projects/ml-images/global/images/family/tf-1-7' was not found
```
|
closed
|
2019-08-13T14:04:22Z
|
2019-09-01T12:12:33Z
|
https://github.com/tensorflow/tensor2tensor/issues/1660
|
[] |
SKRohit
| 1
|
tensorpack/tensorpack
|
tensorflow
| 1,453
|
train raise WRN The following variables are in the dict, but not found in the graph
|
I am training Faster R-CNN on my own dataset and it raises these warnings:
WRN The following variables are in the graph, but not found in the dict: fastrcnn/box/W
WRN The following variables are in the dict, but not found in the graph: group2/block10/conv1/W,
How do I fix this?
CUDA_VISIBLE_DEVICES=0,1 python train.py \
--config "DATA.VAL=('my_dataset_valid',)" "DATA.TRAIN=('my_dataset_train',)" \
TRAIN.BASE_LR=1e-3 \
MODE_FPN=False \
MODE_MASK=False \
TRAIN.EVAL_PERIOD=0 \
"TRAIN.LR_SCHEDULE=[1000]" \
"PREPROC.TRAIN_SHORT_EDGE_SIZE=[800,1200]" \
TRAIN.CHECKPOINT_PERIOD=1 \
DATA.NUM_WORKERS=2 \
TRAIN.NUM_GPUS=2 \
FRCNN.BATCH_PER_IM=8 \
TRAIN.STEPS_PER_EPOCH=500 \
--load ../pre_train/ImageNet-R101-AlignPadding.npz \
--logdir ../train_log/
|
closed
|
2020-06-05T09:25:13Z
|
2020-06-05T09:39:06Z
|
https://github.com/tensorpack/tensorpack/issues/1453
|
[] |
zbsean
| 1
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,903
|
[nodriver] how can i use a socks5 proxy with authentication?
|
I have tried this approach: https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1306#issuecomment-1581842980
```
manifest_json = """
{
"version": "1.0.0",
"manifest_version": 3,
"name": "Chrome Proxy",
"permissions": [
"proxy",
"tabs",
"storage",
"webRequest",
"webRequestAuthProvider"
],
"host_permissions": [
"<all_urls>"
],
"background": {
"service_worker": "background.js"
},
"minimum_chrome_version": "22.0.0"
}
"""
background_js = """
var config = {
mode: "fixed_servers",
rules: {
singleProxy: {
scheme: "http",
host: "%s",
port: parseInt(%s)
},
bypassList: ["localhost"]
}
};
chrome.proxy.settings.set({value: config, scope: "regular"}, function() {});
function callbackFn(details) {
return {
authCredentials: {
username: "%s",
password: "%s"
}
};
}
chrome.webRequest.onAuthRequired.addListener(
callbackFn,
{urls: ["<all_urls>"]},
['blocking']
);
""" % (proxy_host, proxy_port, proxy_login, proxy_pass)
print(proxy_host, proxy_port, proxy_login, proxy_pass)
with open(os.path.join(extension_dir, 'manifest.json'), 'w') as f:
f.write(manifest_json)
with open(os.path.join(extension_dir, 'background.js'), 'w') as f:
f.write(background_js)
driver = await uc.start(browser_args=[f'--load-extension={extension_dir}'])
```
I also tried every scheme: http(s), socks4, socks5.
The extension loads, but I can't reach any website.
The proxy itself definitely works.
|
open
|
2024-05-30T17:00:04Z
|
2025-03-15T23:41:22Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1903
|
[] |
rex1de
| 12
|
scikit-image/scikit-image
|
computer-vision
| 7,260
|
`build_docs.yml` bloats size of scikit-image/docs with regular commits to ~1 GiB
|
In [yesterday's community call](https://github.com/scikit-image/skimage-archive/blob/b7c858391664acc0cf7701d4b6948d998eeccc8c/meeting-notes/2023/2023-12-05_europe_africa_cc.md), @stefanv discovered that our [current setup](https://github.com/scikit-image/scikit-image/blob/604d0776f882e2ba6c50a2b1fc254d718edb659a/.github/workflows/build_docs.yml) eventually led to a very large repo size for https://github.com/scikit-image/docs. Essentially, for every commit to `main` we push a new build of the docs as a commit to scikit-image/docs. Because of git, these old commits and their content stay around forever; cloning it right now pulls down ~1 GiB. This is probably not something we want. :sweat_smile:
A few ideas to address this were discussed:
- The old commits (or at least their data) should be removed manually. I am not sure if this can be done while preserving hash sums of doc builds for stable releases.
- To fix the setup going forward, @stefanv proposed to setup [`build_docs.yml`](https://github.com/scikit-image/scikit-image/blob/604d0776f882e2ba6c50a2b1fc254d718edb659a/.github/workflows/build_docs.yml) such that it amends the previous commit if it detects it as its own previous commit.
- Another solution might be to not version dev docs at all and host them somewhere else. GitHub Pages doesn't support redirects but long-term scikit-image.org and devdocs could be moved elsewhere.
|
open
|
2023-12-06T10:43:36Z
|
2024-06-04T02:27:06Z
|
https://github.com/scikit-image/scikit-image/issues/7260
|
[
":robot: type: Infrastructure",
":sleeping: Dormant"
] |
lagru
| 3
|
autokey/autokey
|
automation
| 1,007
|
Modifier key (like `<ctrl>`) not released on releasing hotkey
|
### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Bug
### Choose one or more terms that describe this issue:
- [x] autokey triggers
- [ ] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [X] bug
- [ ] critical
- [ ] development
- [ ] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
_No response_
### Which Linux distribution did you use?
- Lubuntu LTS 24.04
- Kubuntu LTS 22.04
### Which AutoKey GUI did you use?
Qt
### Which AutoKey version did you use?
Issue occurs on every version I tried. E.g. [`master branch`](https://github.com/autokey/autokey/commit/c1aa63573a918bd6d5ceeada828f7f0f2f9a8483)
### How did you install AutoKey?
Issue occurs with every installation method, whether installed from the distribution or via `git clone`.
### Can you briefly describe the issue?
Using a hotkey with modifiers (e.g. `<ctrl>+<1>`) to trigger a script, when the hotkey is pressed, and the modifier key is released _before_ the non-modifier key, then the modifier key remains stuck in the 'down' position.
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
1. Create a new Script (you can leave the script empty)
2. Register a Hotkey, e.g. `<ctrl>+<1>` and press "Save" to activate it.
3. Use a keyboard event viewer like <https://w3c.github.io/uievents/tools/key-event-viewer.html>
4. Press `<ctrl>`
5. Press `1` (at this point, the empty script gets triggered)
6. Release `<ctrl>` (make sure that `1` is still pressed!)
7. Release `1` afterwards
8. The focused application will not register the KeyRelease event of the `<ctrl>` key and now has the `<ctrl>` key stuck in the 'down' position
### What should have happened?
The modifier (`<ctrl>` key) should be released
### What actually happened?
The modifier (`<ctrl>` key) was not released
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
The problem can be tracked down to `autokey/lib/autokey/interface.py`, to the calls to `window.grab_key` which intercepts the hotkey. While the hotkey is down, modifier events are intercepted as well. Thus, the KeyRelease event of the modifier is intercepted and not passed through to the application. A possible solution is to handle modifier `X.KeyRelease` events in `__flushEvents` and to send them down to the focused window.
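Roughly along these lines (a sketch only; the actual wiring into `__flushEvents` and the bookkeeping of which modifiers were swallowed are left out):
```python
from Xlib import X, display
from Xlib.ext import xtest

# Hypothetical sketch of the proposed fix: when a grabbed hotkey swallows a
# modifier's KeyRelease, synthesize the release so the focused application
# still sees the key go up.
def forward_modifier_release(keycode: int) -> None:
    d = display.Display()
    xtest.fake_input(d, X.KeyRelease, keycode)  # inject the missing KeyRelease
    d.sync()
```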
|
open
|
2025-01-12T09:55:56Z
|
2025-03-21T05:36:04Z
|
https://github.com/autokey/autokey/issues/1007
|
[
"bug",
"help-wanted",
"autokey triggers",
"0.96.1",
"environment"
] |
BenjaminRi
| 11
|
aiortc/aiortc
|
asyncio
| 873
|
SyntaxError: Unexpected non-whitespace character after JSON at position 4
|
Hey! Currently trying to implement something using https://github.com/aiortc/aiortc/tree/main/examples/webcam.
And sometimes, out of nowhere, it generates this popup (sometimes when I rename a variable):

I believe it is because the JSON being sent through the offer is invalid.
|
closed
|
2023-04-28T19:51:57Z
|
2023-10-26T07:44:48Z
|
https://github.com/aiortc/aiortc/issues/873
|
[] |
atharvnuthi
| 2
|
dask/dask
|
pandas
| 11,481
|
⚠️ Upstream CI failed ⚠️
|
[Workflow Run URL](https://github.com/dask/dask/actions/runs/11663532499)
<details><summary>Python 3.12 Test Summary</summary>
```
dask/dataframe/tests/test_arithmetics_reduction.py::test_reductions_frame_dtypes_numeric_only_supported[std]: AssertionError: Regex pattern did not match.
Regex: "'DatetimeArray' with dtype datetime64.*|'DatetimeArray' does not implement reduction|could not convert|'ArrowStringArray' with dtype string|unsupported operand|no kernel|not supported"
Input: "Cannot perform reduction 'std' with string dtype"
dask/dataframe/tests/test_groupby.py::test_std_object_dtype[disk-sum]: [XPASS(strict)] works in dask-expr
dask/dataframe/tests/test_groupby.py::test_std_object_dtype[tasks-sum]: [XPASS(strict)] works in dask-expr
```
</details>
|
closed
|
2024-11-02T01:59:44Z
|
2024-11-04T13:55:45Z
|
https://github.com/dask/dask/issues/11481
|
[
"upstream"
] |
github-actions[bot]
| 0
|
keras-team/autokeras
|
tensorflow
| 1,133
|
ConvBlock get_config lacks `num_layers` and `max_pooling`
|
### Bug Description
<!---
A clear and concise description of what the bug is.
-->
The get_config method in ConvBlock does not include the `num_layers` and `max_pooling` hyperparameters.
https://github.com/keras-team/autokeras/blob/470651e10ef1d07cb42802516e8161d20d3fbe62/autokeras/hypermodels/basic.py#L194
### Setup Details
Include the details about the versions of:
- OS type and version:
- Python:
- autokeras: 1.0.3
- keras-tuner:
- scikit-learn:
- numpy:
- pandas:
- tensorflow: 2.2.0
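A sketch of the kind of fix I would expect (the attribute names are assumed to mirror the constructor arguments of the version linked above):
```python
from autokeras.hypermodels import basic

# Hypothetical patch sketch: include the missing hyperparameters so the
# block survives a get_config/from_config round trip.
class PatchedConvBlock(basic.ConvBlock):
    def get_config(self):
        config = super().get_config()
        config.update({
            "num_layers": self.num_layers,    # missing upstream
            "max_pooling": self.max_pooling,  # missing upstream
        })
        return config
```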
|
closed
|
2020-05-16T19:10:38Z
|
2020-06-03T02:31:28Z
|
https://github.com/keras-team/autokeras/issues/1133
|
[
"bug report",
"pinned"
] |
qingquansong
| 0
|
QuivrHQ/quivr
|
api
| 3,603
|
Implement sentry cache hit/miss
|
open
|
2025-02-25T16:49:36Z
|
2025-02-25T16:49:41Z
|
https://github.com/QuivrHQ/quivr/issues/3603
|
[] |
AmineDiro
| 1
|
|
brightmart/text_classification
|
nlp
| 112
|
suggest upgrade to support python3
|
open
|
2019-03-13T12:34:27Z
|
2023-11-13T10:05:56Z
|
https://github.com/brightmart/text_classification/issues/112
|
[] |
kevinew
| 1
|
|
DistrictDataLabs/yellowbrick
|
matplotlib
| 1,190
|
XGBoost Fit: XGBoostError: value 0 for Parameter num_class should be greater equal to 1
|
**Describe the bug**
When calling the .fit() method of the PrecisionRecallCurve class on an XGBoost multiclass classifier, it raises an error:
`XGBoostError: value 0 for Parameter num_class should be greater equal to 1
num_class: Number of output class in the multi-class classification.`
**To Reproduce**
```python
# Train a XGBClassifieir with your data and parameters after optimization.
classes = np.unique(y_train)
est = xgb.XGBClassifier()
clf = clone(est).set_params(**best_trial.params)
model = clf.fit(x_train, y_train)
# Prediction evaluation
y_pred = clf.predict(x_test)
# ROC Curve from YellowBricks (working)
visualizer = ROCAUC(model, classes=classes)
visualizer.fit(x_train, y_train)
visualizer.score(x_test, y_test)
# PreRec Curve from YellowBricks (error)
viz = PrecisionRecallCurve(model, classes=classes)
viz.fit(x_train, y_train)
viz.score(x_test, y_test)
```
**Dataset**
I use my own dataset and it is not the issue as it is working for 9+ other ML methods.
**Expected behavior**
I expect num_class to be set automatically, as is supposed to happen when calling .fit() on an XGBClassifier.
**Traceback**
Yellowbrick's code for the PR curve is in prcurve.py; I extracted it to work on after the error, without success.
```python-traceback
---------------------------------------------------------------------------
XGBoostError Traceback (most recent call last)
~/scikit_ML_Pipeline_Binary_Notebook/modeling_methods.py in run_XGB_full(x_train, y_train, x_test, y_test, randSeed, i, param_grid, name_path, hype_cv, n_trials, scoring_metric, timeout, wd_path, output_folder, algorithm, data_name, type_average)
1516 # PRECISION RECALL - For each class
1517 viz = PrecisionRecallCurve(model, classes=classes)
-> 1518 viz.fit(x_train, y_train)
1519 viz.score(x_test, y_test)
1520 prec = viz.precision_["micro"]
~/scikit_ML_Pipeline_Binary_Notebook/prcurve.py in fit(self, X, y)
286
287 # Fit the model and return self
--> 288 return super(PrecisionRecallCurve, self).fit(X, Y)
289
290 def score(self, X, y):
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/yellowbrick/classifier/base.py in fit(self, X, y, **kwargs)
174 """
175 # Super fits the wrapped estimator
--> 176 super(ClassificationScoreVisualizer, self).fit(X, y, **kwargs)
177
178 # Extract the classes and the class counts from the target
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/yellowbrick/base.py in fit(self, X, y, **kwargs)
388 """
389 if not check_fitted(self.estimator, is_fitted_by=self.is_fitted):
--> 390 self.estimator.fit(X, y, **kwargs)
391 return self
392
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/sklearn/multiclass.py in fit(self, X, y)
279 # n_jobs > 1 in can results in slower performance due to the overhead
280 # of spawning threads. See joblib issue #112.
--> 281 self.estimators_ = Parallel(n_jobs=self.n_jobs)(delayed(_fit_binary)(
282 self.estimator, X, column, classes=[
283 "not %s" % self.label_binarizer_.classes_[i],
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/joblib/parallel.py in __call__(self, iterable)
1039 # remaining jobs.
1040 self._iterating = False
-> 1041 if self.dispatch_one_batch(iterator):
1042 self._iterating = self._original_iterator is not None
1043
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/joblib/parallel.py in dispatch_one_batch(self, iterator)
857 return False
858 else:
--> 859 self._dispatch(tasks)
860 return True
861
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/joblib/parallel.py in _dispatch(self, batch)
775 with self._lock:
776 job_idx = len(self._jobs)
--> 777 job = self._backend.apply_async(batch, callback=cb)
778 # A job can complete so quickly than its callback is
779 # called before we get here, causing self._jobs to
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/joblib/_parallel_backends.py in apply_async(self, func, callback)
206 def apply_async(self, func, callback=None):
207 """Schedule a func to be run"""
--> 208 result = ImmediateResult(func)
209 if callback:
210 callback(result)
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/joblib/_parallel_backends.py in __init__(self, batch)
570 # Don't delay the application, to avoid keeping the input
571 # arguments in memory
--> 572 self.results = batch()
573
574 def get(self):
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/joblib/parallel.py in __call__(self)
260 # change the default number of processes to -1
261 with parallel_backend(self._backend, n_jobs=self._n_jobs):
--> 262 return [func(*args, **kwargs)
263 for func, args, kwargs in self.items]
264
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/joblib/parallel.py in <listcomp>(.0)
260 # change the default number of processes to -1
261 with parallel_backend(self._backend, n_jobs=self._n_jobs):
--> 262 return [func(*args, **kwargs)
263 for func, args, kwargs in self.items]
264
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/sklearn/utils/fixes.py in __call__(self, *args, **kwargs)
220 def __call__(self, *args, **kwargs):
221 with config_context(**self.config):
--> 222 return self.function(*args, **kwargs)
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/sklearn/multiclass.py in _fit_binary(estimator, X, y, classes)
83 else:
84 estimator = clone(estimator)
---> 85 estimator.fit(X, y)
86 return estimator
87
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/xgboost/core.py in inner_f(*args, **kwargs)
431 for k, arg in zip(sig.parameters, args):
432 kwargs[k] = arg
--> 433 return f(**kwargs)
434
435 return inner_f
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/xgboost/sklearn.py in fit(self, X, y, sample_weight, base_margin, eval_set, eval_metric, early_stopping_rounds, verbose, xgb_model, sample_weight_eval_set, base_margin_eval_set, feature_weights, callbacks)
1174 )
1175
-> 1176 self._Booster = train(
1177 params,
1178 train_dmatrix,
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/xgboost/training.py in train(params, dtrain, num_boost_round, evals, obj, feval, maximize, early_stopping_rounds, evals_result, verbose_eval, xgb_model, callbacks)
187 Booster : a trained booster model
188 """
--> 189 bst = _train_internal(params, dtrain,
190 num_boost_round=num_boost_round,
191 evals=evals,
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/xgboost/training.py in _train_internal(params, dtrain, num_boost_round, evals, obj, feval, xgb_model, callbacks, evals_result, maximize, verbose_eval, early_stopping_rounds)
79 if callbacks.before_iteration(bst, i, dtrain, evals):
80 break
---> 81 bst.update(dtrain, i, obj)
82 if callbacks.after_iteration(bst, i, dtrain, evals):
83 break
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/xgboost/core.py in update(self, dtrain, iteration, fobj)
1494
1495 if fobj is None:
-> 1496 _check_call(_LIB.XGBoosterUpdateOneIter(self.handle,
1497 ctypes.c_int(iteration),
1498 dtrain.handle))
~/anaconda3/envs/ML-pipeline/lib/python3.9/site-packages/xgboost/core.py in _check_call(ret)
208 """
209 if ret != 0:
--> 210 raise XGBoostError(py_str(_LIB.XGBGetLastError()))
211
212
XGBoostError: value 0 for Parameter num_class should be greater equal to 1
num_class: Number of output class in the multi-class classification.
```
**Desktop (please complete the following information):**
- OS: Ubuntu 20.04
- Python Version: Anaconda with Python 3.9.5
- Yellowbrick Version: 1.3.post1
**Additional context**
https://stackoverflow.com/questions/40116215/xgboost-sklearn-wrapper-value-0for-parameter-num-class-should-be-greater-equal-t
As per the stackoverflow link, XGBoost is supposed to set this parameter automatically. This is not the case here.
I spent hours and hours trying to find a workaround: setting it by hand beforehand, and also in the .fit() method, and trying to skip the .fit() method since my classifier is already trained...
Nothing works, and I'm kind of depressed. Has anyone used the Yellowbrick precision-recall curve with XGBoost? It seems weird that the ROC AUC curve does not throw any errors.
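If anyone hits the same thing, here is my guess at the mechanism plus a workaround sketch (unverified; it reuses the names from the snippet above): the visualizer refits the estimator inside a OneVsRestClassifier, one binary problem per class, so multiclass-specific tuned params may trip XGBoost's num_class check on each binary refit.
```python
from sklearn.base import clone

# Hypothetical workaround: drop multiclass-only parameters (e.g.
# objective='multi:softprob' plus num_class) before the visualizer
# clones and refits the estimator on binary sub-problems.
viz_params = {k: v for k, v in best_trial.params.items()
              if k not in ("objective", "num_class")}
viz_est = clone(est).set_params(**viz_params)
viz = PrecisionRecallCurve(viz_est, classes=classes)
viz.fit(x_train, y_train)
viz.score(x_test, y_test)
```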
|
closed
|
2021-07-13T12:30:49Z
|
2021-08-16T15:41:23Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/1190
|
[] |
lambda-science
| 3
|
milesmcc/shynet
|
django
| 284
|
Github container's last publish date was over 3 years ago.
|
1. Ghcr.io's latest container publish date was over 3 years ago, yet people are still downloading it.
2. Docker Hub imposes a very restrictive rate limit for anonymous users: 100 pulls every 6 hours per IP.
For authenticated users, it's 200 pulls every 6 hours per IP.
Paid users can get up to 5,000 pulls a day.
Would it be possible to mirror the package pushes to both Docker Hub **and** ghcr.io?
The workflows can be changed to implement this easily and even be streamlined in the process. We would need to change the workflow permissions in this repository to `Read and write permissions` so new packages can be written.

**Additional Context**
https://docs.docker.com/docker-hub/download-rate-limit/
|
closed
|
2023-09-19T14:19:41Z
|
2023-09-23T03:24:53Z
|
https://github.com/milesmcc/shynet/issues/284
|
[] |
kashalls
| 0
|
onnx/onnx
|
scikit-learn
| 5,878
|
OSError: SavedModel file does not exist at: /content/drive/MyDrive/craft_mlt_25k.h5/{saved_model.pbtxt|saved_model.pb}
|
open
|
2024-01-29T12:17:08Z
|
2025-01-29T06:43:48Z
|
https://github.com/onnx/onnx/issues/5878
|
[
"stale"
] |
RNVALA
| 1
|
|
blb-ventures/strawberry-django-plus
|
graphql
| 9
|
Input field Optional[OneToManyInput] is a required field in the resulting schema
|
Not sure if this is expected behaviour, but if I define an input type like this:
```python
@gql.django.input(ServiceInstance)
class ServiceInstancePartialInput(NodeInput):
id: Optional[gql.ID]
service: Optional[OneToManyInput]
```
The resulting schema marks 'service' as a required field:
<img width="204" alt="image" src="https://user-images.githubusercontent.com/2694872/154663699-bd3ba51e-66eb-4647-8763-fc823acfa0d6.png">
~I can pass an empty object into service, since `set` is not required, but I'd expect `service` to be not required instead.~
Edit: the schema allows for it, but if I do that the mutation fails.
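For reference, a minimal sketch of the pattern plain Strawberry expects for omittable input fields (whether `gql.django.input` follows the same rule is an assumption on my part; `OneToManyInput` is replaced by `str` just to keep this self-contained):
```python
from typing import Optional

import strawberry

# In Strawberry, an input field only becomes optional in the schema when it
# also has a default; UNSET distinguishes "not provided" from an explicit null.
@strawberry.input
class ServiceInstancePartialInput:
    id: Optional[strawberry.ID] = strawberry.UNSET
    service: Optional[str] = strawberry.UNSET
```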
|
closed
|
2022-02-18T10:19:47Z
|
2022-02-28T20:19:35Z
|
https://github.com/blb-ventures/strawberry-django-plus/issues/9
|
[] |
gersmann
| 8
|
Lightning-AI/LitServe
|
api
| 272
|
Supporting query params alongside file upload
|
## 🚀 Feature
Hello team,
Please correct me if I'm wrong but I couldn't find a way to specify query parameters for my endpoint when using a request annotated with FileUpload in `decode_request`. Is it possible to add support for them?
### Motivation
It's sometimes useful to specify additional parameters in the request in order to control inference.
|
closed
|
2024-09-07T01:04:52Z
|
2024-09-09T07:02:17Z
|
https://github.com/Lightning-AI/LitServe/issues/272
|
[
"enhancement",
"help wanted"
] |
energydrink9
| 4
|
ivy-llc/ivy
|
numpy
| 27,936
|
Fix Ivy Failing Test: torch - elementwise.multiply
|
closed
|
2024-01-17T10:41:48Z
|
2024-01-17T11:00:31Z
|
https://github.com/ivy-llc/ivy/issues/27936
|
[
"Sub Task"
] |
samthakur587
| 1
|
|
K3D-tools/K3D-jupyter
|
jupyter
| 255
|
auto_rendering locks view in k3d 2.9.1
|
In 2.8.2 I could set the `auto_rendering` keyword argument of `k3d.plot` to `False` and freely rotate the view with the mouse. I use this to make multiple changes and then explicitly call `k3d.plot.render()` for optimization purposes.
In version 2.9.1, setting `auto_rendering` to `False` leads to the view being stuck (I am able to correctly add objects, though).
Code to reproduce:
```
import k3d
k3d.plot(auto_rendering=False)
```
Jupyterlab info:
```
JupyterLab v2.2.9
Known labextensions:
app dir: /home/miron/.pyenv/versions/3.8.7/envs/deps-update-jan/share/jupyter/lab
@jupyter-voila/jupyterlab-preview v1.1.0 enabled OK
@jupyter-widgets/jupyterlab-manager v2.0.0 enabled OK
ipysheet v0.4.4 enabled OK
jupyter-matplotlib v0.7.4 enabled OK
jupyterlab-datawidgets v7.0.0 enabled OK
jupyterlab-plotly v4.14.1 enabled OK
k3d v2.9.1 enabled OK
plotlywidget v4.14.1 enabled OK
```
Relevant python package version:
```
ipydatawidgets==4.2.0
ipykernel==5.4.3
ipython==7.19.0
ipython-genutils==0.2.0
ipywidgets==7.6.2
jupyter-client==6.1.11
jupyter-core==4.7.0
jupyter-server==1.2.1
jupyterlab==2.2.9
jupyterlab-pygments==0.1.2
jupyterlab-server==1.2.0
jupyterlab-widgets==1.0.0
K3D==2.9.1
nbclient==0.5.1
nbconvert==6.0.7
nbformat==5.0.8
notebook==6.1.6
numpy==1.19.5
traitlets==5.0.5
traittypes==0.2.1
widgetsnbextension==3.5.1
```
|
closed
|
2021-01-13T15:15:18Z
|
2021-02-09T04:37:21Z
|
https://github.com/K3D-tools/K3D-jupyter/issues/255
|
[] |
ghost
| 2
|
ipython/ipython
|
jupyter
| 14,016
|
Get rid of pickle.
|
```python
self.db = PickleShareDB(os.path.join(self.profile_dir.location, 'db'))
```
is like
```python
self.db = exec((Path(self.profile_dir.location) / 'backdoor.py').read_text())
```
, but worse: at least we can audit contents of `backdoor.py` with a text editor.
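A minimal sketch of the direction I mean (JSON as a stand-in; it obviously can't serialize arbitrary objects the way pickle does, which is the trade-off):
```python
import json
from pathlib import Path

# Hypothetical pickle-free alternative: JSON encodes only data, never code,
# so loading a tampered file cannot execute anything.
class JsonShareDB:
    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def __setitem__(self, key: str, value) -> None:
        (self.root / f"{key}.json").write_text(json.dumps(value))

    def __getitem__(self, key: str):
        return json.loads((self.root / f"{key}.json").read_text())
```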
|
open
|
2023-04-13T21:24:31Z
|
2023-04-17T08:14:14Z
|
https://github.com/ipython/ipython/issues/14016
|
[] |
KOLANICH
| 3
|
pyeventsourcing/eventsourcing
|
sqlalchemy
| 175
|
Latest release v7.1.4 tests fail
|
I've tried to contribute to the project to fix some issues and found that some unit tests fail.
Environment: Windows 10 x64, Python 3.7.0
Run command: python -m unittest
Failed tests:
eventsourcing.tests.core_tests.test_events.TestEventWithTimestamp
AssertionError: Decimal('1563540932.143897') not greater than Decimal('1563540932.143897')
eventsourcing.tests.core_tests.test_events.TestEventWithTimestampAndOriginatorID
AssertionError: Decimal('1563540932.144888') not less than Decimal('1563540932.144888')
And sometimes there is a flaky assertion in this test:
eventsourcing.tests.core_tests.test_utils.TestUtils)
AssertionError: Decimal('1563542114.857965') not greater than Decimal('1563542114.857965')
Is this expected?
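My guess (unverified) is Windows clock resolution rather than a logic bug; something like this shows the effect:
```python
import time

# Minimal demonstration of the suspected cause: on Windows the wall clock has
# coarse resolution, so two back-to-back readings can be identical, breaking
# strict greater-than/less-than assertions on event timestamps.
a = time.time()
b = time.time()
print(a < b)  # can print False on Windows, True on most Linux boxes
```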
|
closed
|
2019-07-19T13:01:22Z
|
2019-07-20T02:30:36Z
|
https://github.com/pyeventsourcing/eventsourcing/issues/175
|
[] |
alexanderlarin
| 1
|
yuka-friends/Windrecorder
|
streamlit
| 54
|
ffmpeg uses a large amount of memory during use
|

As shown in the screenshot, total memory usage is about 10 GB. Is this normal?
The main screen is a 4K display.
(~~By the way, how high is the priority of multi-monitor support?~~)
|
closed
|
2023-12-03T05:59:04Z
|
2024-06-26T16:28:36Z
|
https://github.com/yuka-friends/Windrecorder/issues/54
|
[
"bug",
"P1"
] |
ASC8384
| 3
|
iperov/DeepFaceLab
|
deep-learning
| 512
|
OOM but I have enough memory
|
Running trainer.
Loading model...
Model first run.
Enable autobackup? (y/n ?:help skip:n) : y
Write preview history? (y/n ?:help skip:n) : n
Target iteration (skip:unlimited/default) : n
0
Batch_size (?:help skip:0) : 0
Feed faces to network sorted by yaw? (y/n ?:help skip:n) : n
Flip faces randomly? (y/n ?:help skip:y) : y
Src face scale modifier % ( -30...30, ?:help skip:0) : 0
Use lightweight autoencoder? (y/n, ?:help skip:n) : n
Use pixel loss? (y/n, ?:help skip: n/default ) : n
Using TensorFlow backend.
Loading: 100%|########################################################################| 65/65 [00:00<00:00, 181.68it/s]
Loading: 100%|####################################################################| 3688/3688 [00:16<00:00, 228.42it/s]
============== Model Summary ===============
== ==
== Model name: H64 ==
== ==
== Current iteration: 0 ==
== ==
==------------ Model Options -------------==
== ==
== autobackup: True ==
== sort_by_yaw: False ==
== random_flip: True ==
== lighter_ae: False ==
== pixel_loss: False ==
== batch_size: 4 ==
== ==
==-------------- Running On --------------==
== ==
== Device index: 0 ==
== Name: GeForce GTX 1050 Ti ==
== VRAM: 4.00GB ==
== ==
============================================
Starting. Press "Enter" to stop training and save model.
[03:00:51][#000001][10.98s][2.7535][2.8256]
Error: OOM when allocating tensor with shape[2048,1024,3,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node training/Adam/gradients/model_1_1/conv2d_5/convolution_grad/Conv2DBackpropFilter}} = Conv2DBackpropFilter[T=DT_FLOAT, _class=["loc:@training/Adam/gradients/AddN_25"], data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/gradients/model_1_1/conv2d_5/convolution_grad/Conv2DBackpropFilter-0-TransposeNHWCToNCHW-LayoutOptimizer, ConstantFolding/training/Adam/gradients/model_1_1/conv2d_5/convolution_grad/ShapeN-matshapes-1, training/Adam/gradients/AddN_22)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Traceback (most recent call last):
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\DeepFaceLab\mainscripts\Trainer.py", line 109, in trainerThread
iter, iter_time = model.train_one_iter()
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\DeepFaceLab\models\ModelBase.py", line 525, in train_one_iter
losses = self.onTrainOneIter(sample, self.generator_list)
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\DeepFaceLab\models\Model_H64\Model.py", line 89, in onTrainOneIter
total, loss_src_bgr, loss_src_mask, loss_dst_bgr, loss_dst_mask = self.ae.train_on_batch( [warped_src, target_src_full_mask, warped_dst, target_dst_full_mask], [target_src, target_src_full_mask, target_dst, target_dst_full_mask] )
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 1217, in train_on_batch
outputs = self.train_function(ins)
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
return self._call(inputs)
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call
fetched = self._callable_fn(*array_vals)
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1439, in __call__
run_metadata_ptr)
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[2048,1024,3,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node training/Adam/gradients/model_1_1/conv2d_5/convolution_grad/Conv2DBackpropFilter}} = Conv2DBackpropFilter[T=DT_FLOAT, _class=["loc:@training/Adam/gradients/AddN_25"], data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/gradients/model_1_1/conv2d_5/convolution_grad/Conv2DBackpropFilter-0-TransposeNHWCToNCHW-LayoutOptimizer, ConstantFolding/training/Adam/gradients/model_1_1/conv2d_5/convolution_grad/ShapeN-matshapes-1, training/Adam/gradients/AddN_22)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Done.
Press any key to continue . . .
|
closed
|
2019-12-06T01:05:06Z
|
2019-12-06T17:34:48Z
|
https://github.com/iperov/DeepFaceLab/issues/512
|
[] |
N1el132
| 2
|
huggingface/transformers
|
tensorflow
| 36,254
|
Is the T5 model supported with HQQ quantization? (AttributeError: 'HQQLinear' object has no attribute 'weight')
|
### System Info
- `transformers` version: 4.48.2
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.3.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4080 SUPER
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I've created and saved a quantized version like this:
```python
quant_config = HqqConfig(nbits=4, group_size=64)
model = T5EncoderModel.from_pretrained(
    '/storage/Models/FLUX.1-dev/',
    torch_dtype=torch.bfloat16,
    subfolder="text_encoder_2",
    device_map="cuda",
    quantization_config=quant_config,
)
model.save_pretrained(
    "./quantized_pipeline/",
    safe_serialization=True,  # Use safetensors format
)
```
During inference I create the Flux pipeline:
```python
text_encoder_2 = T5EncoderModel.from_pretrained(
    "./quantized_pipeline/",
    subfolder="text_encoder_2",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)
self.pipeline: FluxPipeline = FluxPipeline.from_pretrained(
    self.model_config.path,
    torch_dtype=torch.bfloat16,
    local_files_only=True,
    text_encoder_2=text_encoder_2,
)
```
But when I actually start inference I always get this error:
File "/home/szwagros/anaconda3/envs/image/lib/python3.11/site-packages/accelerate/hooks.py", line 170, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/szwagros/anaconda3/envs/image/lib/python3.11/site-packages/transformers/models/t5/modeling_t5.py", line 339, in forward
forwarded_states = self.DenseReluDense(forwarded_states)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/szwagros/anaconda3/envs/image/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/szwagros/anaconda3/envs/image/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/szwagros/anaconda3/envs/image/lib/python3.11/site-packages/accelerate/hooks.py", line 170, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/szwagros/anaconda3/envs/image/lib/python3.11/site-packages/transformers/models/t5/modeling_t5.py", line 316, in forward
isinstance(self.wo.weight, torch.Tensor)
^^^^^^^^^^^^^^
File "/home/szwagros/anaconda3/envs/image/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
raise AttributeError(
AttributeError: 'HQQLinear' object has no attribute 'weight'
Is it because T5 is not supported, or am I doing something wrong?
### Expected behavior
Flux pipeline should work without errors.
|
open
|
2025-02-18T11:42:54Z
|
2025-03-04T22:30:12Z
|
https://github.com/huggingface/transformers/issues/36254
|
[
"bug"
] |
szwagros
| 6
|
tableau/server-client-python
|
rest-api
| 624
|
Can we update tableau connection string via tableauserverclient api?
|
Hello All,
My use-case requires me to update the connections of all those workbooks/datasources that are connected to certain Redshift databases on tableau server.
I tested a script leveraging tableauserverclient API to update the credentials of published workbooks/datasources connections pointing to those Redshift databases and was able to update it successfully.
After the credentials update, workbooks published using following connection methods:
1. Standard user-password connection: These workbooks worked fine after the script update.
2. ODBC Connection or Redshift connection with custom driver parameters: Although the script updated their username and password details, they failed to connect to the data source.
Our objective is to avoid access to Redshift using individual credentials and enable via IAM profile and Redshift SSO feature based on user's permissions in the architecture.
Since Tableau doesn't support SSO connectivity to Redshift, we devised a strategy to allow users to create workbooks using method #2 and then update the connection details as soon as their workbooks are published on server with method #1 where credentials used in #1 won't be shared with anyone except the admin team.
In order to find out why option #1 works but #2 fails miserably, I compared their connection entries in the "public.data_connections" table of the Tableau Server database and observed a difference in the "Keychain" column of that table. The custom driver parameters embedded when publishing the workbook (using #2) are stored in the keychain column as the value of the "odbc-connect-string-extras" field, whereas the same field is empty for option #1:
**Keychain column for option #1:**
```
{odbc-connect-string-extras: '';
server: server1.redshift.amazonaws.com, dbname: test_database,
port: 5439, class: redshift, username: tableau_user}
```
**Keychain column for option #2:**
```
{odbc-connect-string-extras: DbUser=tableauuser; DbGroups=tableaugroup; Iam=1; AutoCreate=true; Profile=default;
server: server1.redshift.amazonaws.com, dbname: test_database,
port: 5439, class: redshift, username: tableau_user}
```
Since we do not want to provide access to Redshift using individual credentials, please let us know if there is an alternate approach for achieving this,
OR:
Is there any method or workaround for updating the connection to use user-password (after being published on the server) instead of using values from the "odbc-connect-string-extras" field in the keychain column?
Versions used in testing:
Tableau Server version: 2019.2
Tableau desktop version: 2019.2.3
Password update script reference:
[API Reference](https://tableau.github.io/server-client-python/docs/api-ref#workbooksupdate_connection)
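For context, here is the shape of the update script I am running (a sketch with placeholder server URL, IDs, and credentials):
```python
import tableauserverclient as TSC

# Sketch of the credential-update flow described above.
tableau_auth = TSC.TableauAuth("admin", "ADMIN_PASSWORD", site_id="")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(tableau_auth):
    workbook = server.workbooks.get_by_id("WORKBOOK_ID")
    server.workbooks.populate_connections(workbook)
    for conn in workbook.connections:
        conn.username = "tableau_user"
        conn.password = "SECRET"
        conn.embed_password = True
        server.workbooks.update_connection(workbook, conn)
```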
|
closed
|
2020-05-19T09:05:03Z
|
2024-09-19T21:49:08Z
|
https://github.com/tableau/server-client-python/issues/624
|
[
"help wanted",
"document-api"
] |
rgandhi2712
| 2
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 3,596
|
More standard statuses for triaging reports
|
### Proposal
More standard statuses for the recipients to use when handling reports, e.g.:
- Closed – Completed investigation
- Closed – No wrongdoing
- Closed – Allocated as grievance
- Closed – Not enough evidence to proceed
- Allocated for investigation
- Allocated for investigation – Escalated to leadership
- Allocated for investigation – Escalated to authorities
Or something like that.
### Motivation and context
It would work well for the user/recipient and the whistleblower would be more aware of what is actually going on with their report and the status of it.
|
open
|
2023-09-02T14:42:24Z
|
2023-09-11T14:59:15Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3596
|
[
"T: Feature"
] |
danielvaknine
| 2
|
biolab/orange3
|
scikit-learn
| 6,847
|
Problem when import Table module
|
<!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
After installing Orange through pip, trying to import _**from Orange.data import Table**_ raises the following error:
```ModuleNotFoundError: No module named 'pandas.core.arrays.sparse.dtype'```
```
File /usr/local/lib64/python3.9/site-packages/Orange/__init__.py:4
1 # This module is a mixture of imports and code, so we allow import anywhere
2 # pylint: disable=wrong-import-position,wrong-import-order
----> 4 from Orange import data
6 from .misc.lazy_module import _LazyModule
7 from .misc.datasets import _DatasetInfo
File /usr/local/lib64/python3.9/site-packages/Orange/data/__init__.py:12
10 from .io import *
11 from .filter import *
---> 12 from .pandas_compat import *
13 from .aggregate import *
File /usr/local/lib64/python3.9/site-packages/Orange/data/pandas_compat.py:9
7 import pandas as pd
8 from pandas.core.arrays import SparseArray
----> 9 from pandas.core.arrays.sparse.dtype import SparseDtype
10 from pandas.api.types import (
11 is_categorical_dtype, is_object_dtype,
12 is_datetime64_any_dtype, is_numeric_dtype, is_integer_dtype
13 )
15 from Orange.data import (
16 Table, Domain, DiscreteVariable, StringVariable, TimeVariable,
17 ContinuousVariable,
18 )
```
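This looks like a pandas/Orange version mismatch: the private module path that Orange imports from moved between pandas releases. A quick check (pinning pandas to an older release, or upgrading Orange, are the likely fixes, though the exact version bounds are an assumption):
```python
import pandas as pd

# The class itself is always available from the public namespace; only the
# private path used by this Orange release depends on the pandas version.
print(pd.__version__)
print(pd.SparseDtype)  # public home of the class the import above fails on
```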
|
closed
|
2024-07-04T09:20:30Z
|
2024-09-11T12:01:26Z
|
https://github.com/biolab/orange3/issues/6847
|
[
"bug report"
] |
Isaamarod
| 5
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,009
|
ultimate vocal remover : incompatible library version
|
mac mini
intel i5 3Ghz 6core
Big Sur 11.7.0
ram: 16 gb
Hi, here is my problem; hopefully you can help me.
I've installed Ultimate Vocal Remover but it doesn't run at all. The first time I opened it after installation, a window popped up saying "this software is from an unidentified developer, still want to open?". I clicked "open" and then went to "Security & Privacy" to allow UVR, but the software loads for a while and then nothing happens. If I try to open it again, still nothing.
I've also tried to run these commands in terminal but still nothing happens:
sudo spctl --master-disable
sudo xattr -rd com.apple.quarantine /Applications/Ultimate\ Vocal\ Remover.app
If I do this procedure:
- Go to the Applications folder.
- Right-click the Ultimate Vocal Remover app and go to "Show contents"
- Go to Contents->MacOS
- Open "UVR" and a terminal window should pop up
- Provide the terminal output here.
the outcome is this one:
the outcome is this one:
```
The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
/Applications/Ultimate\ Vocal\ Remover.app/Contents/MacOS/UVR ; exit;
Mini-di-Regia:~ regia$ /Applications/Ultimate\ Vocal\ Remover.app/Contents/MacOS/UVR ; exit;
Traceback (most recent call last):
File "UVR.py", line 8, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/__init__.py", line 209, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/__init__.py", line 5, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/convert.py", line 7, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/notation.py", line 8, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/util/__init__.py", line 78, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/util/files.py", line 11, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "pooch/__init__.py", line 19, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "pooch/processors.py", line 14, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "lzma.py", line 27, in <module>
ImportError: dlopen(/Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so, 2): Library not loaded: @rpath/liblzma.5.dylib
Referenced from: /Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so
Reason: Incompatible library version: _lzma.cpython-311-darwin.so requires version 10.0.0 or later, but liblzma.5.dylib provides version 8.0.0
[2193] Failed to execute script 'UVR' due to unhandled exception: dlopen(/Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so, 2): Library not loaded: @rpath/liblzma.5.dylib
Referenced from: /Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so
Reason: Incompatible library version: _lzma.cpython-311-darwin.so requires version 10.0.0 or later, but liblzma.5.dylib provides version 8.0.0
[2193] Traceback:
Traceback (most recent call last):
File "UVR.py", line 8, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/__init__.py", line 209, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/__init__.py", line 5, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/convert.py", line 7, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/notation.py", line 8, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/util/__init__.py", line 78, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/util/files.py", line 11, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "pooch/__init__.py", line 19, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "pooch/processors.py", line 14, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "lzma.py", line 27, in <module>
ImportError: dlopen(/Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so, 2): Library not loaded: @rpath/liblzma.5.dylib
Referenced from: /Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so
Reason: Incompatible library version: _lzma.cpython-311-darwin.so requires version 10.0.0 or later, but liblzma.5.dylib provides version 8.0.0
logout
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.
```
The error seems to be an incompatible library version. Would it work to update the library version, and how do I do that?
Thanks for your availability;
looking forward to your reply
|
open
|
2023-12-06T09:59:36Z
|
2023-12-06T10:34:05Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1009
|
[] |
GlobeAudio
| 0
|
strawberry-graphql/strawberry
|
asyncio
| 3,635
|
Misleading static type for "execution result" attribute (introduced in `0.240.0`)
|
I think https://github.com/strawberry-graphql/strawberry/pull/3554 introduced a static typing bug in the execution result attribute.
It's statically typed as [graphql.ExecutionResult](https://github.com/graphql-python/graphql-core/blob/26701397d84338a42c7acbce78368ae8f9d97271/src/graphql/execution/incremental_publisher.py#L65-L128) type from `graphql-core`:
https://github.com/strawberry-graphql/strawberry/blob/294114689c4a10de4924fa516538ffe797130a43/strawberry/types/execution.py#L25
https://github.com/strawberry-graphql/strawberry/blob/294114689c4a10de4924fa516538ffe797130a43/strawberry/types/execution.py#L56
But in reality it's this custom dataclass from Strawberry's codebase (at least in some situations):
https://github.com/strawberry-graphql/strawberry/blob/294114689c4a10de4924fa516538ffe797130a43/strawberry/types/execution.py#L92-L108
PS: The specific problem is that the typechecker makes us think it's okay to access the `formatted` property, which is actually missing:
https://github.com/graphql-python/graphql-core/blob/26701397d84338a42c7acbce78368ae8f9d97271/src/graphql/execution/incremental_publisher.py#L97-L105
|
open
|
2024-09-19T18:37:32Z
|
2025-03-20T15:56:52Z
|
https://github.com/strawberry-graphql/strawberry/issues/3635
|
[
"bug"
] |
kkom
| 0
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 431
|
Add dynamic margin option to ArcFaceLoss
|
As mentioned in this pull request: https://github.com/KevinMusgrave/pytorch-metric-learning/pull/424#issuecomment-1042868787
|
open
|
2022-02-19T10:56:23Z
|
2022-02-19T10:56:23Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/431
|
[
"enhancement"
] |
KevinMusgrave
| 0
|
deepfakes/faceswap
|
deep-learning
| 621
|
Tensorflow currently has no official prebuild for your CUDA
|
I had an issue with setup.py ( the following )
INFO CUDA version: 9.0
INFO cuDNN version: 7.0.4
WARNING Tensorflow currently has no official prebuild for your CUDA, cuDNN combination.
Either install a combination that Tensorflow supports or build and install your own tensorflow-gpu.
CUDA Version: 9.0
cuDNN Version: 7.0
Help:
Building Tensorflow: https://www.tensorflow.org/install/install_sources
Tensorflow supported versions: https://www.tensorflow.org/install/source#tested_build_configurations
and I was advised to just install cuDNN 7.4, and then my issue was closed. However, the readme says not to go above version 9.0.x of CUDA, and cuDNN 7.4 requires CUDA 10. Can someone attempt to help me here without automatically assuming their suggestion works and closing my bug report?
|
closed
|
2019-02-24T12:37:54Z
|
2019-02-24T12:47:21Z
|
https://github.com/deepfakes/faceswap/issues/621
|
[] |
fetchlister
| 1
|
scanapi/scanapi
|
rest-api
| 159
|
Change HTML report to default
|
Change the HTML report to be the default. Currently it is markdown, but HTML is way more popular and our HTML report is [beautiful](https://github.com/scanapi/scanapi/pull/157) now.
Here is where we need to change it: https://github.com/scanapi/scanapi/blob/master/scanapi/settings.py#L11
|
closed
|
2020-06-04T19:49:50Z
|
2020-06-25T10:29:06Z
|
https://github.com/scanapi/scanapi/issues/159
|
[
"Good First Issue",
"Reporter"
] |
camilamaia
| 0
|
google-research/bert
|
tensorflow
| 583
|
BERT has a non deterministic behaviour
|
I am using the BERT implementation in https://github.com/google-research/bert for feature extracting and I have noticed a weird behaviour which I was not expecting: if I execute the program twice on the same text, I get different results. I need to know if this is normal and why this happens in order to treat this fact in one or another way. Why is the reason for this? Aren't neural networks deterministic algorithms?
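For anyone debugging this, a seed-pinning sketch (note that this alone does not make all GPU kernels deterministic, which may well be the remaining source of variation):
```python
import random

import numpy as np
import tensorflow as tf

# Inference is only repeatable if every source of randomness is pinned;
# unseeded ops and some GPU kernels are not deterministic by default.
random.seed(0)
np.random.seed(0)
tf.set_random_seed(0)  # TF 1.x API, which this repository targets
```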
|
open
|
2019-04-17T08:15:01Z
|
2021-01-11T23:02:56Z
|
https://github.com/google-research/bert/issues/583
|
[] |
RodSernaPerez
| 6
|
strawberry-graphql/strawberry
|
django
| 3,655
|
multipart upload struggle
|
I am trying to make file upload work, with no luck yet.
I went back to the example at https://strawberry.rocks/docs/guides/file-upload#sending-file-upload-requests
but even copy-pasting a multi-file request from Postman returns "Unsupported content type".
## System Information
- Operating system: macOS Sequoia
- Strawberry version (if applicable): latest
## Additional Context
Python code:
```
@strawberry.mutation
def read_files(self, files: List[Upload]) -> List[str]:
    print(f"Received read_files mutation. Number of files: {len(files)}")
    contents = []
    for file in files:
        content = file.read().decode("utf-8")
        contents.append(content)
    return contents
```
The curl request:
```
curl --location 'localhost:7675/graphql' \
--form 'operations="{ \"query\": \"mutation(\$files: [Upload\!]\!) { readFiles(files: \$files) }\", \"variables\": { \"files\": [null, null] } }"' \
--form 'map="{\"file1\": [\"variables.files.0\"], \"file2\": [\"variables.files.1\"]}"' \
--form 'file1=@"/Users/its/Documents/roll.csv"' \
--form 'file2=@"/Users/its/Documents/dump.csv"'
```
Request body:
```
operations: "{ "query": "mutation($files: [Upload!]!) { readFiles(files: $files) }", "variables": { "files": [null, null] } }"
map: "{"file1": ["variables.files.0"], "file2": ["variables.files.1"]}"
file1: undefined
file2: undefined
```
Response headers:
```
date: Tue, 01 Oct 2024 15:10:58 GMT
server: uvicorn
content-length: 24
content-type: text/plain; charset=utf-8
```
Response body:
```
Unsupported content type
```
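One thing worth ruling out (an assumption on my side, based on recent releases disabling multipart uploads by default for security reasons): the integration may need them re-enabled explicitly, e.g. with FastAPI:
```
import strawberry
from strawberry.fastapi import GraphQLRouter

@strawberry.type
class Query:
    ok: bool = True

schema = strawberry.Schema(query=Query)
# multipart_uploads_enabled is assumed to exist in the installed version;
# when it is left at its False default, multipart/form-data requests are
# rejected with exactly "Unsupported content type"
router = GraphQLRouter(schema, multipart_uploads_enabled=True)
```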
|
closed
|
2024-10-01T15:59:02Z
|
2025-03-20T15:56:53Z
|
https://github.com/strawberry-graphql/strawberry/issues/3655
|
[
"bug"
] |
itsklimov
| 5
|
coqui-ai/TTS
|
deep-learning
| 3,276
|
[Bug] Multiple speaker requests?
|
### Describe the bug
The [TTS API](https://tts.readthedocs.io/en/latest/models/xtts.html) states that `speaker_wav` can be a list of filepaths for multiple speaker references. But in `def tts_to_file(...)`, `speaker_wav` only accepts a single string.
### To Reproduce
```
tts.tts_to_file(
text="Some test",
file_path="output.wav",
speaker_wav=["training/1.wav"],
language="en",
)
```
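For reference, a sketch of the signature shape the docs imply (trimmed names and parameters, not the real file), which would accept either a single path or a list:
```
from typing import List, Optional, Union

def tts_to_file(
    text: str,
    file_path: str,
    speaker_wav: Optional[Union[str, List[str]]] = None,
    language: Optional[str] = None,
) -> str:
    # normalize so downstream code always sees a list of reference paths
    refs = [speaker_wav] if isinstance(speaker_wav, str) else speaker_wav
    print(refs)  # stand-in for forwarding the references to the model
    return file_path

tts_to_file("Some test", "output.wav", speaker_wav=["training/1.wav"], language="en")
```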
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": null
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.1",
"TTS": "0.20.6",
"numpy": "1.26.2"
},
"System": {
"OS": "Darwin",
"architecture": [
"64bit",
""
],
"processor": "arm",
"python": "3.11.6",
"version": "Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:20 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6000"
}
}
```
### Additional context
_No response_
|
closed
|
2023-11-20T18:52:29Z
|
2024-01-10T21:59:41Z
|
https://github.com/coqui-ai/TTS/issues/3276
|
[
"bug",
"wontfix"
] |
mukundt
| 4
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 115
|
Check cuda version when determining max k in Accuracy Calculator
|
`num_k` is clipped at 1024, but with CUDA >= 9.5 it can be 2048.
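A minimal sketch of the proposed check (the 9.5 threshold and the 1024/2048 ceilings come from the line above; the helper name is an assumption):
```
import torch

def max_supported_k() -> int:
    cuda = torch.version.cuda  # e.g. "10.2"; None on CPU-only builds
    if cuda and tuple(int(p) for p in cuda.split(".")) >= (9, 5):
        return 2048
    return 1024

num_k = min(4096, max_supported_k())  # clip the requested k to the ceiling
print(num_k)
```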
|
closed
|
2020-06-02T09:15:51Z
|
2021-04-03T00:24:11Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/115
|
[
"enhancement",
"fixed in dev branch"
] |
KevinMusgrave
| 0
|
flairNLP/flair
|
nlp
| 3,603
|
Result class has param marked as Optional that is required
|
The param `scores` has a type hint of `Optional` and a default value of `None`, but is not allowed to be unset. Perhaps it should not be an optional param. However, changing the order of params could break existing code, and keeping its position requires a default value; alternatively, `classification_report` would have to lose its default, which could also break existing code. This is a low-priority issue, so it can be closed if there's no good solution here.
```
class Result:
def __init__(
self,
main_score: float,
detailed_results: str,
classification_report: Optional[dict] = None,
scores: Optional[dict] = None,
) -> None:
classification_report = classification_report if classification_report is not None else {}
assert scores is not None and "loss" in scores, "No loss provided."
```
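One possible direction (a sketch, not the project's actual fix): make `scores` keyword-only and required, which keeps `classification_report`'s default without shifting any positional parameters — though callers who passed either argument positionally would still need updating:
```
from typing import Optional

class Result:
    def __init__(
        self,
        main_score: float,
        detailed_results: str,
        *,
        scores: dict,  # required, but keyword-only so positional callers don't shift
        classification_report: Optional[dict] = None,
    ) -> None:
        self.main_score = main_score
        self.detailed_results = detailed_results
        self.scores = scores
        self.classification_report = classification_report if classification_report is not None else {}
```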
|
closed
|
2025-01-27T00:11:07Z
|
2025-03-15T05:35:37Z
|
https://github.com/flairNLP/flair/issues/3603
|
[] |
MattGPT-ai
| 0
|
explosion/spaCy
|
machine-learning
| 13,633
|
CVE in dependency (black==22.3.0)
|
`black==22.3.0` is a dependency and the version is pinned in spaCy's `requirements.txt`. There is a CVE affecting `black` versions prior to `24.3.0`, specifically CVE-2024-21503 (https://nvd.nist.gov/vuln/detail/CVE-2024-21503).
Impact: Although this is not a run-time vulnerability in most scenarios (unless untrusted code is being processed), it still shows up in the security scans that are the norm for enterprise-grade software, triggering vulnerability-handling / exception processes.
Please evaluate what it would take to migrate to the latest version of `black` so this detection would clear up.
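As a stopgap on our side, a hedged CI check (not part of spaCy) that fails while a pre-fix `black` is present:
```
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("black"))
# CVE-2024-21503 is fixed in black 24.3.0
assert installed >= Version("24.3.0"), f"black {installed} predates the CVE-2024-21503 fix"
```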
## How to reproduce the behaviour
To reproduce: in our pipeline we are using Wiz for scans, but even a "visual/manual" check in `requirements.txt` in the installed python package will show the reference to `black==22.3.0`.
## Your Environment
* Operating System: not relevant (linux based)
* Python Version Used: not relevant (3.8 / 3.9)
* spaCy Version Used: not relevant (at least one of our models uses `3.6.0` but the issue is also affecting `master`)
* Environment Information: not relevant (building various docker based images in linux and/or Windows VMs)
|
open
|
2024-09-25T06:28:28Z
|
2024-11-06T11:24:50Z
|
https://github.com/explosion/spaCy/issues/13633
|
[] |
sstefanov78
| 2
|
tqdm/tqdm
|
jupyter
| 876
|
Using multiple tqdm bars
|
So I'm making a script to scrape data and I need two tqdm bars, but when I run the script, after the first loop the first bar disappears and it starts creating multiple second bars.
Here is a simple recreation of the code I have (this is not the complete code, only a simplified example):
```
from tqdm import tqdm
import time

def test():
    for x in tqdm(range(500), desc='2', position=1):
        time.sleep(0.01)

for x in tqdm(range(500), desc='1', position=0):
    for y in range(500):
        test()
        time.sleep(0.1)
```
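For what it's worth, a common remedy (a sketch; terminal support for cursor movement still matters) is to keep the inner bar from being left behind when it finishes, via `leave=False`:
```
from tqdm import tqdm
import time

def test():
    # leave=False erases the inner bar on completion instead of leaving a
    # finished copy behind, which is what multiplies the second bars
    for x in tqdm(range(500), desc='2', position=1, leave=False):
        time.sleep(0.01)

for x in tqdm(range(500), desc='1', position=0):
    test()
    time.sleep(0.1)
```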
|
closed
|
2020-01-05T19:53:39Z
|
2020-01-14T14:52:22Z
|
https://github.com/tqdm/tqdm/issues/876
|
[
"duplicate 🗐",
"question/docs ‽",
"p2-bug-warning ⚠"
] |
DEADSEC-SECURITY
| 5
|
zappa/Zappa
|
flask
| 1,349
|
Zappa failing due to newer version of setuptools
|
https://github.com/zappa/Zappa/blob/39f75e76d28c1a08d4de6501e6f794fe988cbc98/zappa/core.py#L22
https://github.com/zappa/Zappa/blob/39f75e76d28c1a08d4de6501e6f794fe988cbc98/zappa/core.py#L690
### Env details
```
Python version: 3.10
pip version: 24.2
setuptools version: 75.0.0
```
### Logs
```
File "/_venv/lib/python3.10/site-packages/zappa/core.py", line 690, in create_lambda_zip
copy_tree(temp_package_path, temp_project_path, update=True)
File "/_venv/lib/python3.10/site-packages/setuptools/_distutils/dir_util.py", line 155, in copy_tree
return list(itertools.chain.from_iterable(map(copy_one, names)))
File "/_venv/lib/python3.10/site-packages/setuptools/_distutils/dir_util.py", line 197, in _copy_one
file_util.copy_file(
File "/_venv/lib/python3.10/site-packages/setuptools/_distutils/file_util.py", line 104, in copy_file
from distutils._modified import newer
ModuleNotFoundError: No module named 'distutils._modified'
```
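A minimal local workaround (a sketch of the idea, not an upstream patch) is to replace the removed `distutils` helper with `shutil.copytree`, whose `dirs_exist_ok` flag (Python 3.8+) covers the merge behaviour, minus the mtime-based skipping of `update=True`:
```
import os
import shutil
import tempfile

src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
open(os.path.join(src, "handler.py"), "w").close()

# stand-in for copy_tree(temp_package_path, temp_project_path, update=True)
shutil.copytree(src, dst, dirs_exist_ok=True)
print(os.listdir(dst))
```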
|
open
|
2024-09-16T10:43:54Z
|
2025-02-11T20:33:26Z
|
https://github.com/zappa/Zappa/issues/1349
|
[] |
sridhar562345
| 20
|
mirumee/ariadne
|
api
| 987
|
Fragments not implemented?
|
Is there any plan to implement fragments?
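For what it's worth, fragments are part of the GraphQL query language itself and are resolved by graphql-core, which Ariadne executes on top of — so they should already work; a minimal check (assuming a current Ariadne API):
```
from ariadne import QueryType, gql, make_executable_schema, graphql_sync

type_defs = gql("type Query { hello: String! }")
query = QueryType()

@query.field("hello")
def resolve_hello(*_):
    return "world"

schema = make_executable_schema(type_defs, query)

# the query below uses a fragment; no extra server-side code is needed for it
ok, result = graphql_sync(schema, {"query": """
    fragment F on Query { hello }
    query { ...F }
"""})
print(ok, result)
```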
|
closed
|
2022-12-08T15:13:43Z
|
2024-01-12T18:50:06Z
|
https://github.com/mirumee/ariadne/issues/987
|
[
"docs"
] |
simanlaci
| 7
|
xinntao/Real-ESRGAN
|
pytorch
| 766
|
Problem using the RMSprop optimizer
|
I want to switch the optimizer to RMSprop. Looking at the basicsr source, it appears to implement an RMSprop branch (`optim_type == 'RMSprop': optimizer = torch.optim.RMSprop(params, lr, **kwargs)`),
but then why do I get a not-implemented error?
```
basicsr/models/base_model.py", line 107, in get_optimizer
    raise NotImplementedError(f'optimizer {optim_type} is not supperted yet.')
NotImplementedError: optimizer RMSprop is not supperted yet.
```
I tried it out, and the config only works when the optimizer is set to Adam.
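If the installed basicsr copy really lacks the branch, the shape of `get_optimizer` with RMSprop added would be roughly this (a sketch mirroring the quoted lines, not the actual file):
```
import torch

def get_optimizer(optim_type, params, lr, **kwargs):
    if optim_type == 'Adam':
        return torch.optim.Adam(params, lr, **kwargs)
    elif optim_type == 'RMSprop':
        return torch.optim.RMSprop(params, lr, **kwargs)
    raise NotImplementedError(f'optimizer {optim_type} is not supported yet.')

opt = get_optimizer('RMSprop', [torch.zeros(1, requires_grad=True)], lr=1e-3)
print(type(opt).__name__)  # RMSprop
```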
|
open
|
2024-03-16T03:41:38Z
|
2024-03-16T03:41:38Z
|
https://github.com/xinntao/Real-ESRGAN/issues/766
|
[] |
bujideman1
| 0
|
vimalloc/flask-jwt-extended
|
flask
| 85
|
How to specify audience?
|
Hello,
We are consuming a JWT token from an AWS Cognito Identity Pool that has an "aud" claim specified for the audience. The base JWT decode function is throwing an `InvalidAudienceException` because there is an audience specified in the token but not in the configuration/call to `jwt.decode()`.
Is there a way to specify this in configuration?
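For later readers: a sketch assuming a flask-jwt-extended version that exposes `JWT_DECODE_AUDIENCE` (added after this issue was filed); the value is passed through to PyJWT's `aud` verification:
```
from flask import Flask
from flask_jwt_extended import JWTManager

app = Flask(__name__)
app.config["JWT_SECRET_KEY"] = "change-me"
# checked against the token's "aud" claim; use your Cognito app client id here
app.config["JWT_DECODE_AUDIENCE"] = "my-cognito-app-client-id"
jwt = JWTManager(app)
```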
|
closed
|
2017-09-18T16:48:30Z
|
2019-08-19T23:41:22Z
|
https://github.com/vimalloc/flask-jwt-extended/issues/85
|
[] |
psyvision
| 6
|