| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
streamlit/streamlit
|
python
| 10,820
|
Add `width` & `height` support to `st.table`
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Add `width` & `height` parameters to `st.table` and activate scrolling if it overflows.
### Why?
`st.table` shows all rows on the page, which gets very inconvenient for larger tables. Having a way to define the `height` and activate scrolling if the content overflows would enable `st.table` to be used for more use cases.
### How?
```python
st.table(..., width: int | None = None, height: int | None = None)
```
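For illustration, this is how the proposed parameters might be used (a hypothetical sketch of the requested API; `st.table` does not accept these parameters today):
```python
import pandas as pd
import streamlit as st

df = pd.DataFrame({"a": range(100), "b": range(100)})

# Proposed: cap the table at 300px and scroll the overflowing rows.
st.table(df, height=300)
```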
### Additional Context
- Related https://github.com/streamlit/streamlit/issues/10775
- The table currently tries to squeeze everything into the available width:

|
open
|
2025-03-18T11:02:48Z
|
2025-03-18T11:18:23Z
|
https://github.com/streamlit/streamlit/issues/10820
|
[
"type:enhancement",
"feature:st.table"
] |
lukasmasuch
| 1
|
thp/urlwatch
|
automation
| 661
|
Running a subset of jobs does not work?
|
Attempting to run the `urlwatch 2 4 7` example from the docs gives `urlwatch: error: unrecognized arguments: 2 4 7` on my urlwatch 2.23. Am I doing something wrong (I do have more than 7 jobs)?
|
closed
|
2021-08-08T20:30:44Z
|
2022-03-04T18:11:50Z
|
https://github.com/thp/urlwatch/issues/661
|
[] |
Filip-K
| 4
|
ContextLab/hypertools
|
data-visualization
| 137
|
feature request: add describe support for new reduction models
|
[This](https://github.com/ContextLab/hypertools/pull/136) pull request adds support for a wide range of new data reduction models. However, there aren't equivalent `describe` methods like [this one](http://hypertools.readthedocs.io/en/latest/hypertools.tools.describe_pca.html#hypertools.tools.describe_pca) for those new models.
I propose changing the name of `hyp.tools.describe_pca` to `hyp.tools.describe`, and then adding a `model` flag to specify the reduction model (default: either PCA or IncrementalPCA depending on how we resolve [this issue](https://github.com/ContextLab/hypertools/issues/134)).
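For illustration, the proposed API might look like this (a hypothetical sketch; `hyp.tools.describe` and its `model` flag are the proposal, not existing API):
```python
import numpy as np
import hypertools as hyp

data = np.random.randn(100, 10)

# Today (PCA-specific):
# hyp.tools.describe_pca(data)

# Proposed (one entry point, reduction model chosen via a flag):
hyp.tools.describe(data, model='IncrementalPCA')
```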
|
closed
|
2017-06-14T12:09:04Z
|
2017-10-22T01:22:35Z
|
https://github.com/ContextLab/hypertools/issues/137
|
[
"enhancement",
"easy(ish)"
] |
jeremymanning
| 1
|
graphql-python/graphene-django
|
graphql
| 1,192
|
Want to use dynamic schema generation by passing table name
|
I also want to use REST on top of GraphQL.
That means I would pass the table name in the URL, and the schema would be generated dynamically for that table.
For example, with Bloomberg in the URL it would use a Bloomberg schema, and with Markit in the URL a Markit schema.
How can we achieve this?
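For what it's worth, a minimal sketch of one way to build a schema per table name with graphene-django (the app label `'myapp'`, the model lookup, and the URL wiring are assumptions, not an endorsed pattern from the library):
```python
import graphene
from django.apps import apps
from graphene_django import DjangoObjectType

def build_schema_for(model_name):
    # Hypothetical: resolve the table/model name passed in the URL
    # to a Django model registered under the 'myapp' app label.
    model = apps.get_model('myapp', model_name)

    # Build the ObjectType and Query classes dynamically with type().
    node = type(
        f"{model_name}Type",
        (DjangoObjectType,),
        {"Meta": type("Meta", (), {"model": model, "fields": "__all__"})},
    )
    query = type(
        "Query",
        (graphene.ObjectType,),
        {
            "items": graphene.List(node),
            "resolve_items": lambda self, info: model.objects.all(),
        },
    )
    return graphene.Schema(query=query)
```
A REST view could then call `build_schema_for(table_name)` and execute the incoming query against the returned schema.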
|
open
|
2021-04-28T16:37:05Z
|
2021-04-28T16:37:05Z
|
https://github.com/graphql-python/graphene-django/issues/1192
|
[
"✨enhancement"
] |
PriyatamNayak
| 0
|
modoboa/modoboa
|
django
| 2,259
|
MySQL/MariaDB localhost / 127.0.0.1 config change request
|
Impacted versions: all (?)
OS Type: Debian
OS Version: 10
Database: MariaDB
Manual installation. Over time various upgrades.
https://modoboa.readthedocs.io/en/latest/upgrade.html describes the process to upgrade core and extensions.
It might be that the following is advised/needed to run:
`python manage.py generate_postfix_maps --destdir <directory>`
This produces SQL scripts to be used by Postfix. BUT it uses 'localhost' by default as the database-host/server. In a chroot-ed environment such as the default Debian Postfix install this causes problems as now Postfix tries to access MySQL/MariaDB using a socket. The socket file cannot be used. For details also see: https://workaround.org/ispmail/jessie/postfix-mysql
I think a better approach would be to use 127.0.0.1 instead or make this configurable/tweakable.
In the current setup a checksum file is generated, and it is explicitly mentioned that the SQL script files should not be edited. I tried to honor this and to enforce TCP connections instead of the socket by adding transport=TCP to the [client] stanza in the MariaDB config, but this gets ignored. Therefore the only option seemed to be editing the SQL files, which I did. The drawback is that the checksum file now no longer matches the files.
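For illustration, a Postfix SQL map with the suggested change might look like this (a hypothetical example; the file name, credentials, and query are placeholders, not Modoboa's actual generated script):
```
# /etc/postfix/sql-domains.cf (hypothetical)
user = modoboa
password = <secret>
dbname = modoboa
# 127.0.0.1 forces a TCP connection, which also works inside the
# Postfix chroot where the MySQL/MariaDB socket file is not visible.
hosts = 127.0.0.1
query = SELECT name FROM domains WHERE name='%s' AND enabled=1
```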
|
closed
|
2021-06-09T13:28:54Z
|
2021-08-30T19:55:13Z
|
https://github.com/modoboa/modoboa/issues/2259
|
[
"stale"
] |
olaf7
| 3
|
marcomusy/vedo
|
numpy
| 671
|
Using vedo within WSL (Windows Subsystem for Linux)
|
Hello,
I'm trying to use vedo within WSL. I have an issue with the function "show". When I try to show any 3D model, I get a very simple error message: "Aborted". Note I am using XMING as an X11 server to forward the plot GUI to my Windows system. Using XMING, I am able to use Matplotlib without any issues. Any clue what might be happening?
Thanks,
Rafael.
|
closed
|
2022-07-19T17:08:11Z
|
2023-10-18T13:17:41Z
|
https://github.com/marcomusy/vedo/issues/671
|
[] |
rafaelmarch3
| 4
|
tortoise/tortoise-orm
|
asyncio
| 1,130
|
pydantic_model_creator cannot create two interfaces for the same model
|
```python
Prototypes = pydantic_model_creator(models.Prototypes)
NewPrototype = pydantic_model_creator(models.Prototypes, exclude=('id', 'created'))
```
When `pydantic_model_creator` is called a second time for `Prototypes`, it directly returns the result of the first call, stored in `_MODEL_INDEX`:
```
_MODEL_INDEX = {'app.models.Prototypes.leaf': <class 'abc.app.models.Prototypes.leaf'>}
```
Code position: `tortoise/contrib/pydantic/creator.py`, line 397:
```python
# Here we de-dup to ensure that a uniquely named object is a unique object
# This fixes some Pydantic constraints.
if _name in _MODEL_INDEX:
    return _MODEL_INDEX[_name]
```
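A possible workaround, assuming the `name` parameter of `pydantic_model_creator` (which should cache the two models under different keys in `_MODEL_INDEX`):
```python
Prototypes = pydantic_model_creator(models.Prototypes, name='Prototypes')
NewPrototype = pydantic_model_creator(
    models.Prototypes, name='NewPrototype', exclude=('id', 'created')
)
```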
|
closed
|
2022-05-20T01:43:23Z
|
2022-05-20T06:07:13Z
|
https://github.com/tortoise/tortoise-orm/issues/1130
|
[] |
StevenLianaL
| 1
|
seleniumbase/SeleniumBase
|
web-scraping
| 2,809
|
Feature Request: Allow `uc.click()` to accept `WebElement` element objects directly
|
As a user of SeleniumBase, I would like the `uc.click()` function to accept not only XPath and CSS selectors but also WebElement objects. This functionality would simplify coding by not needing to reference the location of an element twice, especially in cases where an element's location needs to be searched for or obtained via another function.
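For illustration, the desired usage might look like this (a hypothetical sketch of the requested behavior; `driver` is assumed to be an existing UC Mode session, and today `uc.click()` only accepts selector strings):
```python
# Locate the element once, e.g. via some custom search logic...
element = driver.find_element("css selector", "#submit-button")

# ...then pass the WebElement directly instead of re-locating it:
uc.click(element)
```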
|
closed
|
2024-05-27T15:50:41Z
|
2024-05-29T02:55:53Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2809
|
[
"duplicate",
"workaround exists",
"UC Mode / CDP Mode"
] |
cdchris12
| 1
|
flasgger/flasgger
|
flask
| 62
|
AttributeError: 'function' object has no attribute 'swag_path'
|
When doing something like:
```python
@op_blueprint.route('/auth', methods=['GET', 'POST'])
@swag_from('docs/auth_get.yml', methods=['GET'])
@swag_from('docs/auth_post.yml', methods=['POST'])
def auth():
    """The authorization endpoint."""
    # do stuff ...
```
I see:
```
File "/home/lwm/wages/fsfe/oidcp/.venv/lib/python3.5/site-packages/flask/cli.py", line 231, in load_app
rv = locate_app(self.app_import_path)
File "/home/lwm/wages/fsfe/oidcp/.venv/lib/python3.5/site-packages/flask/cli.py", line 90, in locate_app
__import__(module)
File "/home/lwm/wages/fsfe/oidcp/wsgi.py", line 7, in <module>
from oidcp.app import create_app
File "/home/lwm/wages/fsfe/oidcp/oidcp/app.py", line 22, in <module>
from oidcp.views import op_blueprint
File "/home/lwm/wages/fsfe/oidcp/oidcp/views.py", line 30, in <module>
@swag_from('../swagger_docs/auth/auth_post.yml', methods=['POST'])
File "/home/lwm/wages/fsfe/oidcp/.venv/lib/python3.5/site-packages/flasgger/utils.py", line 71, in decorator
'filepath': function.swag_path,
AttributeError: 'function' object has no attribute 'swag_path'
```
Whereas the function does have a `swag_paths` attribute (from my digging around).
|
closed
|
2017-03-28T10:19:10Z
|
2017-03-29T21:18:30Z
|
https://github.com/flasgger/flasgger/issues/62
|
[
"bug"
] |
decentral1se
| 5
|
vitalik/django-ninja
|
pydantic
| 724
|
ImportError: cannot import name 'Depends' from 'ninja'
|
I am trying to import
`from ninja import Depends`
I get
`ImportError: cannot import name 'Depends' from 'ninja'`
But I see that `motivation.md` and comments from late 2022 still use `Depends`; has it been updated?
|
closed
|
2023-03-31T14:54:49Z
|
2024-04-15T10:02:36Z
|
https://github.com/vitalik/django-ninja/issues/724
|
[] |
magedhelmy1
| 4
|
explosion/spaCy
|
nlp
| 12,383
|
Training transformer model goes from score 0.97 to ZERO
|
### Discussed in https://github.com/explosion/spaCy/discussions/12301
Originally posted by **mbrunecky**, February 18, 2023:
I am training NER using transformer model.
On one of my data sets, during epoch 2, the score reaches 0.97 and then (after a huge loss) drops to ZERO, where it stays until the process dies with an out-of-memory error.
What should I be looking for as the reason for this behavior?
```
02/18-02:52:32.282 ============================= Training pipeline =============================[0m
02/18-02:52:32.282 [i] Pipeline: ['transformer', 'ner', 'doc_cleaner']
02/18-02:52:32.282 [i] Initial learn rate: 0.0
02/18-02:52:32.282 E # LOSS TRANS... LOSS NER ENTS_F ENTS_P ENTS_R SCORE
02/18-02:52:32.282 --- ------ ------------- -------- ------ ------ ------ ------
02/18-02:53:26.942 0 0 741.03 842.20 0.83 0.44 6.68 0.03
02/18-03:00:53.389 0 800 35387.67 131378.27 92.45 91.63 93.28 0.93
02/18-03:08:21.388 0 1600 846.64 93264.55 92.85 92.78 92.91 0.93
02/18-03:15:56.981 0 2400 5107.06 68810.17 94.86 95.75 93.99 0.95
02/18-03:23:40.199 0 3200 23586.03 35748.45 95.69 96.39 95.01 0.96
02/18-03:31:42.270 0 4000 3324.74 10904.08 95.27 95.47 95.08 0.95
02/18-03:40:10.199 1 4800 69579.98 3293.41 95.71 95.29 96.13 0.96
02/18-03:49:08.304 1 5600 15203.48 1351.42 96.14 96.01 96.27 0.96
02/18-03:58:35.240 1 6400 5012.19 1022.37 96.19 96.33 96.06 0.96
02/18-04:08:44.572 1 7200 2621.33 943.09 95.85 95.30 96.40 0.96
02/18-04:19:21.697 1 8000 2262.92 829.70 96.75 97.13 96.37 0.97
02/18-04:31:10.735 1 8800 10229.21 982.74 95.90 97.48 94.37 0.96
02/18-04:43:10.557 2 9600 29553.29 1354.11 96.03 95.29 96.78 0.96
02/18-04:56:31.975 2 10400 3775.07 824.47 96.61 97.12 96.10 0.97
02/18-05:10:22.435 2 11200 2795971.49 12601.45 0.00 0.00 0.00 0.00
02/18-05:25:14.185 2 12000 513981.72 22502.53 0.00 0.00 0.00 0.00
02/18-05:40:56.915 2 12800 40347.06 18249.37 0.00 0.00 0.00 0.00
02/18-05:59:26.751 2 13600 34795.68 18328.94 0.00 0.00 0.00 0.00
02/18-06:18:05.600 3 14400 32507.22 19082.38 0.00 0.00 0.00 0.00
02/18-06:37:15.405 3 15200 27791.56 18447.91 0.00 0.00 0.00 0.00
02/18-06:57:16.382 3 16000 25837.16 18390.90 0.00 0.00 0.00 0.00
02/18-06:57:26.490 [+] Saved pipeline to output directory
02/18-06:59:28.779 Invoked train_run_004:: process finished, exit value=-1073741571 (0xc00000fd)
```
Configuration:
```
[paths]
train = "L:\\training\\CA\\PLACER\\FEB23\\DMOD\\train"
dev = "L:\\training\\CA\\PLACER\\FEB23\\DMOD\\tval"
vectors = null
init_tok2vec = null
[system]
gpu_allocator = "pytorch"
seed = 0
[nlp]
lang = "en"
pipeline = ["transformer","ner","doc_cleaner"]
batch_size = 80
disabled = []
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
[nlp.before_creation]
@callbacks = "adjust_stop_words"
add_stop_words = []
rem_stop_words = ["amount","and","as","at","between","by","eight","eleven","each","except","fifteen","fifty","first","five","for","formerly","forty","four","hereby","herein","nine","of","six","sixty","ten","third","three","to","twelve","twenty","two"]
debug = true
[components]
[components.doc_cleaner]
factory = "doc_cleaner"
silent = true
[components.doc_cleaner.attrs]
tensor = null
_.trf_data = null
[components.ner]
factory = "ner"
incorrect_spans_key = null
moves = null
scorer = {"@scorers":"spacy.ner_scorer.v1"}
update_with_oracle_cut_size = 128
[components.ner.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 80
maxout_pieces = 2
use_upper = false
nO = null
[components.ner.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
pooling = {"@layers":"reduce_mean.v1"}
upstream = "*"
[components.transformer]
factory = "transformer"
max_batch_items = 2048
set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}
[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v3"
name = "roberta-base"
mixed_precision = true
[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 80
[components.transformer.model.grad_scaler_config]
[components.transformer.model.tokenizer_config]
use_fast = true
[components.transformer.model.transformer_config]
[corpora]
[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
gold_preproc = true
limit = 0
augmenter = null
[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 0
gold_preproc = true
limit = 0
augmenter = null
[training]
accumulate_gradient = 3
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
patience = 8000
max_epochs = 0
max_steps = 32000
eval_frequency = 800
frozen_components = []
before_to_disk = null
annotating_components = []
[training.batcher]
@batchers = "spacy.batch_by_padded.v1"
discard_oversize = true
size = 1536
buffer = 256
get_length = null
[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false
[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001
[training.optimizer.learn_rate]
@schedules = "warmup_linear.v1"
warmup_steps = 250
total_steps = 32000
initial_rate = 0.00005
[training.score_weights]
ents_f = 0.5
ents_p = 0.2
ents_r = 0.3
ents_per_type = null
[pretraining]
[initialize]
vectors = null
init_tok2vec = null
vocab_data = null
lookups = null
before_init = null
after_init = null
[initialize.components]
[initialize.tokenizer]
```
|
open
|
2023-03-08T08:55:36Z
|
2023-03-08T08:55:36Z
|
https://github.com/explosion/spaCy/issues/12383
|
[
"bug",
"feat / ner",
"perf / memory",
"feat / training",
"feat / transformer"
] |
svlandeg
| 0
|
statsmodels/statsmodels
|
data-science
| 8,778
|
DOC: period in seasonal decompose is confusing, should be periodicity.
|
`period` sounds like it's the unit for one datetime or time-period observation, as used in pandas.
AFAIU, seasonal decompose needs the length of a cycle, i.e. the periodicity.
I got confused looking at the seasonal decompose docstring while commenting on
https://stackoverflow.com/questions/75954402/seasonal-decompose-you-must-specify-a-period-or-x-must-be-a-pandas-object-with
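For illustration, a minimal example of what `period` currently means (the number of observations per seasonal cycle, not the unit of one observation):
```python
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

# 10 years of monthly data with a yearly cycle: period=12 observations.
x = np.sin(np.arange(120) * 2 * np.pi / 12) + np.random.randn(120) * 0.1
result = seasonal_decompose(x, period=12)
print(result.seasonal[:12])
```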
|
closed
|
2023-04-07T01:52:05Z
|
2023-04-26T11:57:51Z
|
https://github.com/statsmodels/statsmodels/issues/8778
|
[
"comp-tsa",
"comp-docs"
] |
josef-pkt
| 2
|
SciTools/cartopy
|
matplotlib
| 2,103
|
fill function paint outside and not inside the polygon
|
I have an odd behaviour: it is the outside of the polygon that is filled.
```python
import matplotlib.pyplot as plt  # imports added; implied by the snippet
import cartopy.crs as ccrs

fig = plt.figure()
x, y = [-44, -44, 45, 45, -44], [-45, 80, 80, -45, -45]
ax = fig.add_subplot(1, 1, 1, projection=ccrs.Robinson(10))
ax.coastlines()
ax.plot(x, y, marker='o', transform=ccrs.Geodetic())
ax.fill(x, y, color='coral', transform=ccrs.Geodetic(), alpha=0.8)
ax.gridlines()
ax.set_global()
plt.show()
```
Am I missing something?

Tested with cartopy 0.21.0.
|
open
|
2022-11-17T19:35:27Z
|
2024-05-13T09:10:40Z
|
https://github.com/SciTools/cartopy/issues/2103
|
[
"Bug: transforming filled paths"
] |
PBrockmann
| 4
|
pytest-dev/pytest-xdist
|
pytest
| 891
|
After running tests terminal left in broken state
|
It seems running:
```
pytest -n auto some_tests.py
```
breaks the terminal.
It seems xdist outputs control characters, messing up the terminal. No text is displayed anymore when typing.
I have to issue the `reset` command in order to have a normally functioning terminal again.
os: linux
terminals used: kitty, konsole, xterm ...
pytest-xdist version used: `3.2.1`
|
closed
|
2023-03-23T20:56:58Z
|
2023-04-06T11:00:15Z
|
https://github.com/pytest-dev/pytest-xdist/issues/891
|
[
"needs information"
] |
JelleSmet-TomTom
| 3
|
Johnserf-Seed/TikTokDownload
|
api
| 474
|
[BUG]
|
**Describe the bug**
A clear and concise description of the bug.
**Bug reproduction**
Steps to reproduce the behavior:
1. Double-clicking TikTokMultiGUI brings up an error dialog
**Desktop (please fill in the following information):**
- OS: [e.g. Windows 11 64-bit]
- VPN proxy: [e.g. enabled, disabled]
- Version: [e.g. 1.2.3]
**Additional context**
Add any other context about the problem here.

|
open
|
2023-07-06T15:44:11Z
|
2023-07-06T15:44:11Z
|
https://github.com/Johnserf-Seed/TikTokDownload/issues/474
|
[
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] |
sss12312
| 0
|
slackapi/bolt-python
|
fastapi
| 668
|
get ordered message events
|
Hi 👋
My use case with Slack events is to create a message-forwarding tool: basically, if you send messages to a channel `a`, they get redirected to channels `x`, `y`, and `z`, for example. The issue is that if the user in channel `a` sends multiple messages quickly (`1`, `2`, `3`, `4`), the other channels will get the messages in a different order, like `4`, `2`, `1`, `3`, which disrupts context.
I guess this is due to how webhooks work and the congestion that can be hitting the server.
But is there a way to tell the Slack API to ensure the order of these messages?
I also saw that the Slack RTM API could help here, but my app is not a classical one, so I wouldn't be able to use the RTM API in this case, right? Or can I combine the Events API with the RTM API?
Thank you
#### The `slack_bolt` version
1.7.0
#### Python runtime version
3.8.2
|
closed
|
2022-06-09T18:41:58Z
|
2022-06-13T15:48:53Z
|
https://github.com/slackapi/bolt-python/issues/668
|
[
"question"
] |
juliocanares
| 6
|
sammchardy/python-binance
|
api
| 926
|
Dedicated logging (not root)
|
Currently some log records are emitted directly through the `logging` module (i.e. via the root logger).
Most of the calls are in the "[streams.py](https://github.com/sammchardy/python-binance/blob/master/binance/streams.py)" file.
Meanwhile, the class `ReconnectingWebsocket` has its own logger (`self._log`), which for some reason is not used.
#### Example
```python
import logging

class ReconnectingWebsocket:
    def __init__(...):
        self._log = logging.getLogger(__name__)

    def foo(...):
        logging.debug(f"no message in {self.TIMEOUT} seconds")
```
This makes it hard to use this library's logger together with other application loggers, because root-level records become mixed in.
#### Suggestion example:
```python
import logging

class ReconnectingWebsocket:
    def __init__(...):
        self._log = logging.getLogger(__name__)

    def foo(...):
        self._log.debug(f"no message in {self.TIMEOUT} seconds")
```
|
closed
|
2021-06-15T07:57:15Z
|
2021-09-07T22:54:30Z
|
https://github.com/sammchardy/python-binance/issues/926
|
[] |
Olegt0rr
| 0
|
BeanieODM/beanie
|
asyncio
| 285
|
Beanie for Testing
|
Hello,
This issue is to consult about the possibility of using Beanie with some sort of MongoDB mock.
Does Beanie have "native" support for testing, like a client mock or something similar? Otherwise, can Beanie be used with an in-memory MongoDB?
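One possible approach, assuming the third-party `mongomock-motor` package (which provides an in-memory, Motor-compatible client; this is not an official Beanie feature):
```python
import asyncio

from beanie import Document, init_beanie
from mongomock_motor import AsyncMongoMockClient  # assumed third-party dependency

class Flower(Document):
    name: str

async def main():
    # No real MongoDB needed: everything lives in memory.
    client = AsyncMongoMockClient()
    await init_beanie(database=client["test_db"], document_models=[Flower])

    await Flower(name="rose").insert()
    print(await Flower.find_all().to_list())

asyncio.run(main())
```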
|
closed
|
2022-06-10T19:47:45Z
|
2025-02-19T12:34:37Z
|
https://github.com/BeanieODM/beanie/issues/285
|
[] |
agustin-del-pino
| 3
|
supabase/supabase-py
|
flask
| 860
|
missing await in async create_client on 2.5.2 and up
|
# Bug report
## Describe the bug
The async `create_client` function uses an async function (`AsyncClient.create`) without awaiting it, leading to the return of a double-nested coroutine that needs two await calls to get to the client. The signature does not match the actually returned type.
Users who try using the function according to the signature will receive an unresolved coroutine object instead of an `AsyncClient`, preventing its use.
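For illustration, a minimal generic reproduction of the double-nested-coroutine pitfall described above (plain asyncio, not the supabase code itself):
```python
import asyncio

async def inner():
    return "client"

async def create():
    # Bug: missing `await inner()` here, so a coroutine object
    # is returned instead of the resolved value.
    return inner()

async def main():
    result = await create()   # one await resolves `create`, not `inner`
    print(type(result))       # <class 'coroutine'> -- needs a second await
    print(await result)       # "client"

asyncio.run(main())
```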
## To Reproduce
```python
import asyncio

from supabase import ClientOptions  # import assumed; not shown in the original snippet
from supabase._async.client import create_client

async def main():
    url = ...
    key = ...
    client = await create_client(url, key, ClientOptions(storage_client_timeout=300))
    await client.table('flowers').select('*').execute()

asyncio.run(main())
```
will result in an exception being thrown as the client object is a coroutine:
```
Exception: 'coroutine' object has no attribute 'table'
```
## Expected behavior
The `create_client` function should return a coroutine that resolves to an `AsyncClient`.
|
closed
|
2024-07-16T16:23:39Z
|
2024-07-16T19:10:51Z
|
https://github.com/supabase/supabase-py/issues/860
|
[
"bug"
] |
realChesta
| 4
|
tqdm/tqdm
|
jupyter
| 1,125
|
tqdm.reset() crashes when disable=True
|
- [x] I have marked all applicable categories:
+ [x] exception-raising bug
+ [ ] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [ ] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
haendel:~/projects/telekom/trunk/sandbox/tqdm> python3
Python 3.9.1 (default, Jan 8 2021, 17:17:43)
[Clang 12.0.0 (clang-1200.0.32.28)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tqdm, sys
>>> print(tqdm.__version__, sys.version, sys.platform)
4.56.0 3.9.1 (default, Jan 8 2021, 17:17:43)
[Clang 12.0.0 (clang-1200.0.32.28)] darwin
>>> t = tqdm.tqdm(total=10, disable=True)
>>> t.reset()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Volumes/projects/telekom/trunk/sandbox/tqdm/tqdm/std.py", line 1348, in reset
self.last_print_t = self.start_t = self._time()
AttributeError: 'tqdm' object has no attribute '_time'
```
This issue is similar/related to [Issue 624](https://github.com/tqdm/tqdm/issues/624).
My test above actually uses current master source:
Branch: master
Commit: 94842e1027570867885788dc9d7fe038484f085b
Date: 2021-02-09
```
~/projects/sandbox/tqdm> pip3 list
Package Version
-------------- -------
libxml2-python 2.9.10
pip 21.0.1
~/projects/sandbox/tqdm> pip3 install -e git://github.com/tqdm/tqdm.git@master#egg=tqdm
Obtaining tqdm from git+git://github.com/tqdm/tqdm.git@master#egg=tqdm
Updating ./src/tqdm clone (to revision master)
Running command git fetch -q --tags
Running command git reset --hard -q 94842e1027570867885788dc9d7fe038484f085b
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Installing collected packages: tqdm
Running setup.py develop for tqdm
Successfully installed tqdm
haendel:~/projects/telekom/trunk/sandbox/tqdm>pip3 list
Package Version Location
-------------- ------- -----------------------------------------------------
libxml2-python 2.9.10
pip 21.0.1
tqdm 4.56.1 /Volumes/projects/telekom/trunk/sandbox/tqdm/src/tqdm
~/projects/sandbox/tqdm> python3 -c "import tqdm; print(tqdm.__version__)"
4.56.0
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
|
closed
|
2021-02-10T06:05:30Z
|
2021-02-11T05:13:15Z
|
https://github.com/tqdm/tqdm/issues/1125
|
[
"p0-bug-critical ☢",
"to-merge ↰",
"c1-quick 🕐"
] |
michael-koeller
| 1
|
Johnserf-Seed/TikTokDownload
|
api
| 728
|
[BUG] Cookie issue: the response is always empty
|

I have changed the cookie many times, but the response is always empty. I tried both a guest cookie and a logged-in cookie; nothing can be downloaded.
|
open
|
2024-06-19T02:44:38Z
|
2024-06-28T10:47:27Z
|
https://github.com/Johnserf-Seed/TikTokDownload/issues/728
|
[
"故障(bug)",
"重复(duplicate)",
"已确认(confirmed)"
] |
sakuraIsNow
| 2
|
guohongze/adminset
|
django
| 100
|
Project management
|
Please add [port, start_command] fields to the project information.
|
open
|
2019-03-07T04:53:13Z
|
2019-03-07T04:53:13Z
|
https://github.com/guohongze/adminset/issues/100
|
[] |
Eddy1210
| 0
|
plotly/dash
|
jupyter
| 3,206
|
Adding Pattern Matching selector ALLELSE
|
Similar to `ALLSMALLER`, but doesn't assume that id indices are numeric and sequential.
## Use case
**General description**
- There's a set of components with the same pattern-matching id type.
- Based on an interaction with one of those components (or with another component with the same pattern-matching id index), something about it will change.
- That same property will also change in the rest of the components of the set, but with a different value.
**Example**
- We have 6 clickable cards.
- Each clickable card is an html.Div with pattern-matching id `{'type':'card','index': f"card_{i}"}` wrapped in an html.A with pattern-matching id `{'type':'invisible_button','index': f"card_{i}"}`.
- When a user clicks one card, its `style` changes to HIGHLIGHT_STYLE (red) and the `style` of the rest changes to DEFAULT_STYLE (black).
https://github.com/user-attachments/assets/63e0d473-537c-4df7-bd04-140ad784b449
## Desired behavior/code
```python
@callback(
    Output({'type':'card','id': MATCH}, 'style'),
    Output({'type':'card','id': ALLELSE}, 'style'),
    Input({'type':'invisible_button','id': MATCH}, 'n_clicks'),
)
def restore_card_format(n_clicks):
    return HIGHLIGHT_STYLE, [DEFAULT_STYLE]*5
```
## Current behavior/code
```python
@callback(
    Output({'type':'card','id': ALL}, 'style'),
    Input({'type':'invisible_button','id': ALL}, 'n_clicks'),
)
def restore_card_format(n_clicks):
    return DEFAULT_STYLE

@callback(
    Output({'type':'card','id': MATCH}, 'style'),
    Input({'type':'invisible_button','id': MATCH}, 'n_clicks'),
)
def highlight_card(n_clicks):
    return HIGHLIGHT_STYLE
```
Using the above code produces an error like this:
```
In the callback for output(s): {"id":MATCH,"type":"card"}.style Output 0 ({"id":MATCH,"type":"card"}.style) overlaps another output ({"id":ALL,"type":"card"}.style) used in a different callback.
```
**Workaround - current way to implement this functionality:**
```python
from dash import Dash, Input, Output, callback, html, dcc, ALL, MATCH, ctx

app = Dash()
server = app.server

DEFAULT_STYLE = {
    "height": "200px",
    "width": "200px",
    "margin": "5px",
    "background-color": "black",
    "color": "white"
}

HIGHLIGHT_STYLE = {
    "height": "200px",
    "width": "200px",
    "margin": "5px",
    "background-color": "red",
}

app.layout = html.Div([
    html.A(
        html.Div(
            id={'type': 'card', 'index': f"card_{i}"},
            style=DEFAULT_STYLE,
            children=f"card_{i}"
        ),
        id={'type': 'invisible_button', 'index': f"card_{i}"},
    )
    for i in range(6)
], style={"display": "inline-flex"})

@callback(
    Output({'type': 'card', 'index': ALL}, 'style'),
    Input({'type': 'invisible_button', 'index': ALL}, 'n_clicks'),
    prevent_initial_call=True
)
def highlight_card(n_clicks):
    selected_card_id = ctx.triggered_id["index"]
    # this is how ctx.outputs_list looks:
    # [{'id': {'index': 'card_0', 'type': 'card'}, 'property': 'style'}, {'id': {'index': 'card_1', 'type': 'card'}, ...]
    new_styles = [HIGHLIGHT_STYLE if card["id"]["index"] == selected_card_id else DEFAULT_STYLE for card in ctx.outputs_list]
    return new_styles

app.run(debug=True)
```
|
open
|
2025-03-10T17:27:03Z
|
2025-03-11T09:17:39Z
|
https://github.com/plotly/dash/issues/3206
|
[
"feature",
"P2",
"cs"
] |
celia-lm
| 0
|
minimaxir/textgenrnn
|
tensorflow
| 50
|
Why is the number of classes the length of the vocab file plus 1?
|
https://github.com/minimaxir/textgenrnn/blob/91dfec130006d52bf45abc5d8e58369aefecd02e/textgenrnn/textgenrnn.py#L59
Thanks in advance.
|
closed
|
2018-07-20T18:21:02Z
|
2018-07-25T00:08:10Z
|
https://github.com/minimaxir/textgenrnn/issues/50
|
[] |
dmonopoly
| 2
|
cleanlab/cleanlab
|
data-science
| 586
|
Is it possible to use CleanLab for more complex learning tasks than regression or classification?
|
I usually deal with semantic segmentation and instance segmentation (structured prediction) instead of classification. Can CleanLab help with that?
|
closed
|
2023-01-04T21:35:15Z
|
2023-08-10T23:00:04Z
|
https://github.com/cleanlab/cleanlab/issues/586
|
[
"question"
] |
AndreaPi
| 4
|
apify/crawlee-python
|
automation
| 720
|
Implement browser per proxy to PlaywrightCrawler
|
- Implement browser-per-proxy in PlaywrightCrawler in a similar way to how it is done in Crawlee JS.
- https://crawlee.dev/api/browser-pool/interface/LaunchContextOptions#browserPerProxy
- Before implementation sync with @barjin, as he can provide further context and also suggest potential improvements (mostly in context with the session pool).
|
open
|
2024-11-21T19:44:32Z
|
2024-11-21T19:44:44Z
|
https://github.com/apify/crawlee-python/issues/720
|
[
"enhancement",
"t-tooling"
] |
vdusek
| 0
|
giotto-ai/giotto-tda
|
scikit-learn
| 684
|
[BUG] can't download Giotto
|
**Describe the bug**
Basically, I'm walking through the download provided here (https://giottosuite.readthedocs.io/en/master/gettingstarted.html), and I am stuck at the downloading Giotto step with R. I've run the command from here (https://giotto-ai.github.io/gtda-docs/0.5.1/installation.html) to download it, and it seems fine (no errors thrown), but it doesn't do anything ultimately.
**To reproduce**
Steps to reproduce the behavior:
1. Open RGui
2. Run the download command
3. SSL connect error
**Expected behavior**
It should download.
**Actual behaviour**
Error: Failed to install 'unknown package' from GitHub:
SSL connect error
|
open
|
2023-11-06T20:05:50Z
|
2024-05-30T02:04:23Z
|
https://github.com/giotto-ai/giotto-tda/issues/684
|
[
"bug"
] |
ashahassan
| 2
|
deepspeedai/DeepSpeed
|
machine-learning
| 6,972
|
[BUG] libaio on amd node
|
Hi, I installed libaio as
`apt install libaio-dev`
And I can see both .so and .h exist
```
root@b6410ec8bb69:/code/DeepSpeed# find / -name "libaio.so*" 2>/dev/null
/usr/lib/x86_64-linux-gnu/libaio.so.1
/usr/lib/x86_64-linux-gnu/libaio.so
/usr/lib/x86_64-linux-gnu/libaio.so.1.0.1
root@b6410ec8bb69:/code/DeepSpeed# find / -name "libaio.h" 2>/dev/null
/usr/include/libaio.h
```
And I set up the flags as:
```
echo 'export CFLAGS="-I/usr/include"' >> ~/.bashrc
echo 'export LDFLAGS="-L/usr/lib"' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"' >> ~/.bashrc
source ~/.bashrc
```
but when I run ds_report, it says async_io is not compatible:
```
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
[WARNING] gds is not compatible with ROCM
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn is not compatible with ROCM
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch']
torch version .................... 2.3.0a0+gitd2f9472
deepspeed install path ........... ['/code/DeepSpeed/deepspeed']
deepspeed info ................... 0.16.2+unknown, unknown, unknown
torch cuda version ............... None
torch hip version ................ 6.2.41134-65d174c3e
nvcc version ..................... None
deepspeed wheel compiled w. ...... torch 2.3, hip 6.2
shared memory (/dev/shm) size .... 910.48 GB
```
|
open
|
2025-01-25T01:59:00Z
|
2025-02-05T16:54:27Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6972
|
[
"bug",
"training"
] |
GuanhuaWang
| 3
|
open-mmlab/mmdetection
|
pytorch
| 11,318
|
COCODataset instantiation ignores shared memory settings during annotation loading in distributed training
|
**Describe the bug**
The COCODataset implementation uses the COCO API to load in dataset annotations upon initialization. However, since a COCODataset instance is created once for every GPU worker, the dataset annotations get loaded in once for each worker. This works fine for smaller datasets, but with larger datasets this very quickly eats up all system RAM (NOT GPU RAM) when using multiple GPUs. It appears that the BaseDataset class intends to set serialize_data to True by default which should result in the dataset being shared across GPU workers, but this does not appear to work with COCODataset since the actual loading of annotations happens before the data ever has a chance to be serialized.
**Reproduction**
1. What command or script did you run?
```bash
tools/dist_train.sh /path/to/my/config.py 2 --auto-scale-lr
```
2. Did you make any modifications on the code or config? Did you understand what you have modified?
The only modifications I made to my config file were to point it to my dataset location.
3. What dataset did you use?
A 120GB custom COCO format instance segmentation dataset containing a high volume of instances per image (~100-250 instances per image)
**Environment**
```
sys.platform: linux
Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0,1: NVIDIA RTX A5500
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.1, V11.1.105
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.9.0
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) oneAPI Math Kernel Library Version 2021.2-Product Build 20210312 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.1
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
- CuDNN 8.0.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.10.0
OpenCV: 4.8.1
MMEngine: 0.10.1
MMDetection: 3.2.0+fe3f809
```
**Error traceback**
There is no applicable traceback. During the "loading annotations" phase before training, when watching system RAM (my personal workstation has 256GB of RAM), the memory usage will steadily climb until it hits 256GB and the worker is killed.
**Bug fix**
See above. This occurs due to the dataset being loaded by the pycocotools API once for every GPU.
|
open
|
2023-12-27T21:04:35Z
|
2024-01-09T14:52:18Z
|
https://github.com/open-mmlab/mmdetection/issues/11318
|
[] |
h-fernand
| 1
|
viewflow/viewflow
|
django
| 19
|
Task state field
|
NEW => ACTIVATED => STARTED => FINISHED
ACTIVATED => CANCELLED
STARTED => CANCELLED
- => ERROR => ACTIVATED
|
closed
|
2014-03-10T04:27:11Z
|
2014-05-01T09:58:11Z
|
https://github.com/viewflow/viewflow/issues/19
|
[
"request/enhancement"
] |
kmmbvnr
| 1
|
ydataai/ydata-profiling
|
data-science
| 1,329
|
Request: Fix the width of tqdm status information
|
### Missing functionality
The tqdm progress bar is nice, but the status information moves around constantly because column names of variable width are embedded into the status line. This makes this part of the info unreadable for fast-progressing parts.
### Proposed feature
Fix the width of the status component, either by scanning for the largest name, or truncating to something reasonable.
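For illustration, a minimal sketch of the truncation approach with plain tqdm (the column names and the width of 25 are made up):
```python
from tqdm import tqdm

columns = ["age", "income", "very_long_column_name_example", "zip"]
with tqdm(total=len(columns)) as pbar:
    for name in columns:
        # Pad/truncate the variable part to a fixed width so the
        # rest of the status line stops jumping around.
        pbar.set_description(f"Describe variable: {name:<25.25}")
        # ... actual profiling work would happen here ...
        pbar.update(1)
```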
### Alternatives considered
_No response_
### Additional context
_No response_
|
open
|
2023-05-17T21:24:05Z
|
2023-10-17T13:06:10Z
|
https://github.com/ydataai/ydata-profiling/issues/1329
|
[
"code quality 📈",
"Hacktoberfest :fireworks:"
] |
gdevenyi
| 2
|
davidteather/TikTok-Api
|
api
| 863
|
Help me write the code
|
Help me write the code to download videos without a watermark.
|
closed
|
2022-03-20T10:59:23Z
|
2024-03-20T10:43:18Z
|
https://github.com/davidteather/TikTok-Api/issues/863
|
[
"bug"
] |
xxkillaxx
| 1
|
dask/dask
|
scikit-learn
| 11,476
|
Add `split_out` parameter to `nunique()`
|
The current function signature for `Series.nunique()` is:
https://github.com/dask/dask/blob/9b4bef654be34e51f14f95fbfc62454018986d8d/dask/dataframe/core.py#L4192-L4194
The default value for `split_out` in `drop_duplicates` is set to 1:
https://github.com/dask/dask/blob/9b4bef654be34e51f14f95fbfc62454018986d8d/dask/dataframe/core.py#L960
While for most datasets `drop_duplicates` will reduce the data enough that it can fit into one partition, this isn't always the case. If the deduplicated data doesn't fit into one partition, Dask will run out of memory and crash.
Setting `drop_duplicates(split_out=...)` to some large value, or to the number of partitions in the original series/data-frame can be used to get around this, but since `split_out` is not exposed to the `nunique()` function, `drop_duplicates()` is always being called with `split_out=1`, meaning that `nunique()` can result in Dask running out of memory.
Is it possible to add `split_out` to `nunique()`'s signature and forward it to the `drop_duplicates()` call to fix this?
```python
@derived_from(pd.Series)
def nunique(self, split_every=None, split_out=1, dropna=True):
    uniqs = self.drop_duplicates(split_every=split_every, split_out=split_out)
```
`split_out` is similarly missing for `DataFrame.nunique()`:
https://github.com/dask/dask/blob/9b4bef654be34e51f14f95fbfc62454018986d8d/dask/dataframe/core.py#L6085-L6086
Happy to make these changes if this is sensible.
---
It's interesting to note that `SeriesGroupBy.nunique()` correctly has the `split_out` parameter in its signature.
https://docs.dask.org/en/latest/generated/dask.dataframe.groupby.SeriesGroupBy.nunique.html
|
closed
|
2024-10-31T14:25:48Z
|
2024-10-31T15:10:04Z
|
https://github.com/dask/dask/issues/11476
|
[
"needs triage"
] |
eonu
| 3
|
pyeve/eve
|
flask
| 1,037
|
MONGO_URI raises exception when no database is specified
|
The mongo data layer raises an exception when `MONGO_URI` does not contain a database name. However, the [MongoDB URI syntax](https://docs.mongodb.com/manual/reference/connection-string/#standard-connection-string-format) states that only the host is required and for good reasons.
In certain scenarios, it might be useful to set the ReplicaSet and other options with the URI, and then set the database dynamically with MONGO_DBNAME, maybe depending on the user performing the request.
The current implementation mimics Flask_PyMongo behavior. However, since we recently dropped that dependency, we are free to improve in this area.
|
closed
|
2017-07-06T13:32:01Z
|
2017-07-06T13:38:56Z
|
https://github.com/pyeve/eve/issues/1037
|
[
"enhancement"
] |
nicolaiarocci
| 0
|
521xueweihan/HelloGitHub
|
python
| 2,638
|
oo
|
closed
|
2023-11-06T16:11:12Z
|
2023-11-24T03:32:40Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2638
|
[] |
Joshuasdx
| 0
|
|
saulpw/visidata
|
pandas
| 1,527
|
[DirSheet] `Enter` no longer opens a file from the DirSheet
|
**Small description**
`Enter` no longer opens a file from the DirSheet
**Expected result**
`Enter` opens the file in the current row.
**Actual result with screenshot**
https://asciinema.org/a/Uqck07yZJFLhoTfqEkv3Rw7vQ
**Steps to reproduce with sample data and a .vd**
```
vd sample_data
```
Hit Enter on a row for a data file; it doesn't open the file, but does a `pyobj-row`.
**Additional context**
Version of VisiData: close to the latest develop version.
|
closed
|
2022-09-15T22:37:32Z
|
2022-09-17T17:12:33Z
|
https://github.com/saulpw/visidata/issues/1527
|
[
"bug"
] |
frosencrantz
| 2
|
StackStorm/st2
|
automation
| 5,128
|
Add information on setting api key in st2rc.sample.ini
|
Pulled over from st2docs issue: https://github.com/StackStorm/st2docs/issues/1029
### From https://github.com/StackStorm/st2docs/issues/1029
Right now it is nowhere documented how to set the API key in the CLI config file (e.g. `~/.st2/config`).
Add some additional information to the CLI reference that documents that.
https://docs.stackstorm.com/reference/cli.html
CLI config entry
```
[credentials]
api_key = <key>
```
|
closed
|
2021-01-27T13:42:18Z
|
2021-06-02T20:10:40Z
|
https://github.com/StackStorm/st2/issues/5128
|
[
"enhancement",
"CLI",
"stale"
] |
kingsleyadam
| 1
|
pallets-eco/flask-sqlalchemy
|
sqlalchemy
| 795
|
Document differences between Flask-SQLAlchemy and SQLAlchemy
|
It would be very useful to have a quick reference to exactly what Flask-SQLAlchemy is abstracting away or ways in which default SQLAlchemy behaviour has been altered.
The reason for this is that, due to the (well deserved) popularity of this library, the majority of SQLAlchemy content refers directly to this version and may not be applicable to "vanilla" SQLAlchemy.
|
closed
|
2019-12-09T20:30:44Z
|
2020-12-05T20:21:39Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/795
|
[] |
callamd
| 1
|
horovod/horovod
|
pytorch
| 3,302
|
Is it possible for HVD to support other Collective Communication Libraries as plugins?
|
**Is your feature request related to a problem? Please describe.**
No.
**Describe the solution you'd like**
Adding new Collective Communication Library without changing HVD's code.
**Describe alternatives you've considered**
HVD involves a plugin framework for CCL extension.
**Additional context**
NA.
|
closed
|
2021-12-08T09:42:56Z
|
2021-12-08T10:34:32Z
|
https://github.com/horovod/horovod/issues/3302
|
[
"enhancement"
] |
hanwei131
| 0
|
BMW-InnovationLab/BMW-YOLOv4-Training-Automation
|
rest-api
| 36
|
Where should we get the converttoyolo.py script? It is not present in the Label tool repo
|
__
|
closed
|
2022-12-14T05:49:55Z
|
2022-12-14T11:04:24Z
|
https://github.com/BMW-InnovationLab/BMW-YOLOv4-Training-Automation/issues/36
|
[] |
ProjectsCypher
| 1
|
koxudaxi/datamodel-code-generator
|
pydantic
| 2,079
|
`SyntaxError: from __future__ imports must occur at the beginning of the file` when custom header is used
|
**Describe the bug**
When a model is generated with a custom header, importing the resulting model module causes the following error:
```
SyntaxError: from __future__ imports must occur at the beginning of the file
```
This error occurs because the `from __future__` statement must appear near the top of the module (see [documentation](https://docs.python.org/3/reference/simple_stmts.html#future-statements))
**To Reproduce**
Generate model via the provided command below using any schema:
Used commandline:
```
$ datamodel-codegen --input "./openapi.yml" --input-file-type openapi --output ./model.py --output-model-type pydantic_v2.BaseModel --snake-case-field --target-python-version 3.10 --use-schema-description --use-field-description --base-class CustomBaseModel --custom-file-header-path ./custom_pydantic_file_header.py
```
**Expected behavior**
Generated module could be imported without any issues.
**Version:**
- OS: MacOS Sonoma 14.6.1
- Python version: 3.10.14
- datamodel-code-generator version: 0.25.9
**Additional context**
Consider hoisting the future statement if any custom header option is passed.
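For illustration, a minimal sketch of the hoisting idea (a hypothetical helper, not the project's actual code):
```python
import re

FUTURE_RE = re.compile(r"^from __future__ import .+$", re.MULTILINE)

def hoist_future_imports(source: str) -> str:
    """Move any `from __future__ import ...` lines above the custom header."""
    futures = FUTURE_RE.findall(source)
    if not futures:
        return source
    # Leftover blank lines where the imports used to be are harmless.
    body = FUTURE_RE.sub("", source)
    return "\n".join(futures) + "\n" + body
```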
|
open
|
2024-08-22T15:28:59Z
|
2024-08-22T15:28:59Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/2079
|
[] |
Dezzley
| 0
|
TencentARC/GFPGAN
|
pytorch
| 240
|
ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects
|
Using python 3.10 on macOS 12.5.1 21G83 arm64
```
...
2 warnings generated.
clang -bundle -undefined dynamic_lookup -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/reagle/.pyenv/versions/3.10.6/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/reagle/.pyenv/versions/3.10.6/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib build/temp.macosx-12.5-arm64-3.10/build/src.macosx-12.5-arm64-3.10/numpy/core/src/multiarray/_multiarray_tests.o build/temp.macosx-12.5-arm64-3.10/numpy/core/src/common/mem_overlap.o -Lbuild/temp.macosx-12.5-arm64-3.10 -lnpymath -o build/lib.macosx-12.5-arm64-3.10/numpy/core/_multiarray_tests.cpython-310-darwin.so
building 'numpy.core._multiarray_umath' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include
creating build/temp.macosx-12.5-arm64-3.10/numpy/core/src/multiarray
creating build/temp.macosx-12.5-arm64-3.10/numpy/core/src/umath
creating build/temp.macosx-12.5-arm64-3.10/build/src.macosx-12.5-arm64-3.10/numpy/core/src/umath
creating build/temp.macosx-12.5-arm64-3.10/build/src.macosx-12.5-arm64-3.10/numpy/core/src/common
creating build/temp.macosx-12.5-arm64-3.10/private
creating build/temp.macosx-12.5-arm64-3.10/private/var
creating build/temp.macosx-12.5-arm64-3.10/private/var/folders
creating build/temp.macosx-12.5-arm64-3.10/private/var/folders/2l
creating build/temp.macosx-12.5-arm64-3.10/private/var/folders/2l/73vdx0sd5rvcn38yg036h6500000gp
creating build/temp.macosx-12.5-arm64-3.10/private/var/folders/2l/73vdx0sd5rvcn38yg036h6500000gp/T
creating build/temp.macosx-12.5-arm64-3.10/private/var/folders/2l/73vdx0sd5rvcn38yg036h6500000gp/T/pip-install-fbbelvi2
creating build/temp.macosx-12.5-arm64-3.10/private/var/folders/2l/73vdx0sd5rvcn38yg036h6500000gp/T/pip-install-fbbelvi2/numpy_ce74744c5c82485781086dfe0358f17f
creating build/temp.macosx-12.5-arm64-3.10/private/var/folders/2l/73vdx0sd5rvcn38yg036h6500000gp/T/pip-install-fbbelvi2/numpy_ce74744c5c82485781086dfe0358f17f/numpy
creating build/temp.macosx-12.5-arm64-3.10/private/var/folders/2l/73vdx0sd5rvcn38yg036h6500000gp/T/pip-install-fbbelvi2/numpy_ce74744c5c82485781086dfe0358f17f/numpy/_build_utils
creating build/temp.macosx-12.5-arm64-3.10/private/var/folders/2l/73vdx0sd5rvcn38yg036h6500000gp/T/pip-install-fbbelvi2/numpy_ce74744c5c82485781086dfe0358f17f/numpy/_build_utils/src
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Ibuild/src.macosx-12.5-arm64-3.10/numpy/core/src/umath -Ibuild/src.macosx-12.5-arm64-3.10/numpy/core/src/npymath -Ibuild/src.macosx-12.5-arm64-3.10/numpy/core/src/common -Inumpy/core/include -Ibuild/src.macosx-12.5-arm64-3.10/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/Users/reagle/.pyenv/versions/3.10.6/include/python3.10 -Ibuild/src.macosx-12.5-arm64-3.10/numpy/core/src/common -Ibuild/src.macosx-12.5-arm64-3.10/numpy/core/src/npymath -c'
extra options: '-faltivec -I/System/Library/Frameworks/vecLib.framework/Headers'
clang: numpy/core/src/multiarray/alloc.c
clang: numpy/core/src/multiarray/array_assign_scalar.c
clang: numpy/core/src/multiarray/buffer.c
clang: numpy/core/src/multiarray/common.c
clang: numpy/core/src/multiarray/conversion_utils.c
clang: numpy/core/src/multiarray/datetime_strings.c
clang: numpy/core/src/multiarray/descriptor.c
clang: build/src.macosx-12.5-arm64-3.10/numpy/core/src/multiarray/einsum.c
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: numpy/core/src/multiarray/hashdescr.c
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: numpy/core/src/multiarray/multiarraymodule.c
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: numpy/core/src/multiarray/nditer_constr.c
clang: numpy/core/src/multiarray/scalarapi.c
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: build/src.macosx-12.5-arm64-3.10/numpy/core/src/multiarray/lowlevel_strided_loops.c
clang: numpy/core/src/multiarray/vdot.c
clang: numpy/core/src/multiarray/temp_elide.c
clang: numpy/core/src/multiarray/refcount.c
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: build/src.macosx-12.5-arm64-3.10/numpy/core/src/umath/loops.c
clang: numpy/core/src/umath/ufunc_object.c
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: numpy/core/src/umath/ufunc_type_resolution.c
clang: build/src.macosx-12.5-arm64-3.10/numpy/core/src/npymath/ieee754.c
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: numpy/core/src/common/array_assign.c
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: numpy/core/src/common/ucsnarrow.c
clang: build/src.macosx-12.5-arm64-3.10/numpy/core/src/common/npy_cpu_features.c
clang: /private/var/folders/2l/73vdx0sd5rvcn38yg036h6500000gp/T/pip-install-fbbelvi2/numpy_ce74744c5c82485781086dfe0358f17f/numpy/_build_utils/src/apple_sgemv_fix.c
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly
error: Command "clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Ibuild/src.macosx-12.5-arm64-3.10/numpy/core/src/umath -Ibuild/src.macosx-12.5-arm64-3.10/numpy/core/src/npymath -Ibuild/src.macosx-12.5-arm64-3.10/numpy/core/src/common -Inumpy/core/include -Ibuild/src.macosx-12.5-arm64-3.10/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/Users/reagle/.pyenv/versions/3.10.6/include/python3.10 -Ibuild/src.macosx-12.5-arm64-3.10/numpy/core/src/common -Ibuild/src.macosx-12.5-arm64-3.10/numpy/core/src/npymath -c numpy/core/src/multiarray/buffer.c -o build/temp.macosx-12.5-arm64-3.10/numpy/core/src/multiarray/buffer.o -MMD -MF build/temp.macosx-12.5-arm64-3.10/numpy/core/src/multiarray/buffer.o.d -faltivec -I/System/Library/Frameworks/vecLib.framework/Headers" failed with exit status 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numpy
Failed to build numpy
ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
❯
```
|
open
|
2022-08-22T20:05:26Z
|
2022-08-22T20:05:26Z
|
https://github.com/TencentARC/GFPGAN/issues/240
|
[] |
reagle
| 0
|
plotly/dash-table
|
dash
| 167
|
rename `fe` and `be` in pagination_mode?
|
- `fe` and `be` are too terse
- Many users aren't necessarily aware of the development terms "front-end" or "back-end"
- However, the name will need to refer to these concepts in some way
- Alternatives:
- `client` / `server`
- `browser` / `callback`
- `frontend` / `backend`
|
closed
|
2018-10-24T18:28:38Z
|
2019-06-27T00:20:43Z
|
https://github.com/plotly/dash-table/issues/167
|
[] |
chriddyp
| 1
|
dynaconf/dynaconf
|
flask
| 472
|
[RFC] add an option to auto-output all vars after the settings are loaded [was] Is is possible to export all settings of a running service?
|
It is a great feature that env_vars can override any setting loaded from files.
For diagnosis, debugging, etc., it is often useful to be able to extract the configuration of a running service (perhaps loaded from a database, files, and env vars).
Do we have any means to get all `DYNACONF` settings that a running service is using?
Would it be possible to auto-export all config to env vars after loading?
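One existing helper that may cover part of this, assuming Dynaconf's `as_dict()` method (which returns the merged configuration from all loaded sources):
```python
from dynaconf import Dynaconf

settings = Dynaconf(settings_files=["settings.toml"], envvar_prefix="DYNACONF")

# Dump the effective settings, including file and env-var overrides.
print(settings.as_dict())
```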
|
closed
|
2020-11-12T08:31:36Z
|
2022-07-02T20:12:27Z
|
https://github.com/dynaconf/dynaconf/issues/472
|
[
"wontfix",
"RFC"
] |
jruizaranguren
| 3
|
httpie/cli
|
python
| 634
|
Typo on the home page - "OSX"
|
On https://httpie.org/ "OSX" should be either "OS X" or alternatively the new name is "macOS".
Thanks for httpie!
|
closed
|
2017-11-19T08:37:06Z
|
2017-12-17T19:36:11Z
|
https://github.com/httpie/cli/issues/634
|
[] |
shlomif
| 2
|
AntonOsika/gpt-engineer
|
python
| 670
|
Make improve flag less intrusive by moving over files like "all_output.txt" and "file_list" to the .gpteng folder
|
This is done by simply using the new DB in #665 and writing to it
|
closed
|
2023-09-03T14:32:01Z
|
2023-09-22T09:12:33Z
|
https://github.com/AntonOsika/gpt-engineer/issues/670
|
[
"enhancement",
"good first issue"
] |
AntonOsika
| 3
|
dynaconf/dynaconf
|
flask
| 325
|
[bug] Dynaconf + Flask + DotEnv
|
**Describe the bug**
Something odd happens when a .env file exists in a project using Flask and dotenv: the Flask app no longer runs.
**To Reproduce**
[Look this repo to reproduce](https://github.com/Bernardoow/project_test)
**Additional context**
[This is the problem. ](https://github.com/pallets/flask/blob/024f0d384cf5bb65c76ac59f8ddce464b2dc2ca1/docs/cli.rst#disable-dotenv)
|
closed
|
2020-04-05T18:19:40Z
|
2020-07-27T20:51:10Z
|
https://github.com/dynaconf/dynaconf/issues/325
|
[
"bug",
"Pending Release"
] |
Bernardoow
| 0
|
dask/dask
|
scikit-learn
| 11,282
|
Automatically rechunk in array-shuffle if groups are too large
|
Shuffling an array can create big chunks along the shuffled dimension. Keeping the other chunk sizes constant can therefore blow up the size of each chunk along the way.
The proper solution here is to rechunk on the other dimensions to ensure that chunk sizes stay reasonable.
We should be smart here and ensure that, wherever possible, the rechunking is only a splitting of existing chunks and not an all-to-all communication, even if we don't end up with ideal chunk sizes.
Additionally, we should expose a user option to give us explicit chunks if needed (although I hope that we can make the default smart enough that this is rarely needed).
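To make the splitting idea concrete, a rough sketch (not dask's implementation; the function name and heuristic are illustrative) of how chunks on a non-shuffled axis could be split by a factor derived from the growth of the shuffled axis:
```python
import math

def split_other_axis(chunks: tuple[int, ...], growth: float) -> tuple[int, ...]:
    """Split each chunk into ceil(growth) near-equal pieces so that
    bytes-per-chunk stays roughly constant. This is a pure splitting of
    existing chunks; no all-to-all communication is needed."""
    factor = max(1, math.ceil(growth))
    out: list[int] = []
    for c in chunks:
        q, r = divmod(c, factor)
        # r pieces of size q+1 and (factor - r) pieces of size q
        pieces = [q + 1] * r + [q] * (factor - r) if q else [c]
        out.extend(p for p in pieces if p)
    return tuple(out)

# shuffled axis grew ~3.2x, so split the other axis ~4 ways:
print(split_other_axis((10, 10, 10), growth=3.2))
# -> (3, 3, 2, 2, 3, 3, 2, 2, 3, 3, 2, 2)
```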
cc @dcherian
|
closed
|
2024-08-07T19:39:12Z
|
2024-08-16T14:20:19Z
|
https://github.com/dask/dask/issues/11282
|
[
"array"
] |
phofl
| 0
|
marimo-team/marimo
|
data-science
| 3,979
|
Reactive Tests not working (for me)
|
### Describe the bug
Hi -- I have tried using Reactive tests as added in #3938 , but was not able to trigger pytest to run reactively (after enabling the experimental flag of course)

I have pytest in my venv deps. Not sure how to debug this further, but this is a great feature and I would love to be able to use it.
### Environment
<details>
```
{
"marimo": "0.11.14",
"OS": "Darwin",
"OS Version": "24.3.0",
"Processor": "arm",
"Python Version": "3.12.7",
"Binaries": {
"Browser": "133.0.6943.142",
"Node": "v23.6.0"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.25.0",
"packaging": "24.2",
"psutil": "5.9.8",
"pygments": "2.18.0",
"pymdown-extensions": "10.14.3",
"pyyaml": "6.0.2",
"ruff": "0.9.4",
"starlette": "0.45.3",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.2"
},
"Optional Dependencies": {
"altair": "5.5.0",
"anywidget": "0.9.13",
"duckdb": "1.1.3",
"pandas": "2.2.3",
"polars": "1.18.0",
"pyarrow": "18.1.0"
},
"Experimental Flags": {
"reactive_tests": true
}
}
```
</details>
### Code to reproduce
_No response_
|
closed
|
2025-03-04T20:20:56Z
|
2025-03-04T22:47:26Z
|
https://github.com/marimo-team/marimo/issues/3979
|
[
"bug"
] |
habemus-papadum
| 1
|
tfranzel/drf-spectacular
|
rest-api
| 837
|
Response sample
|
closed
|
2022-10-22T11:47:29Z
|
2022-10-22T11:58:30Z
|
https://github.com/tfranzel/drf-spectacular/issues/837
|
[] |
mugane-dj
| 0
|
|
mlflow/mlflow
|
machine-learning
| 14,865
|
[BUG] `Add New Tag` does not clear the previous text
|
### MLflow version
2.20.4.dev0
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Mac
- **Python version**: 3.9
- **yarn version, if running the dev UI**: 1.22
### Describe the problem
The previous tag info is not cleared
https://github.com/user-attachments/assets/4e199142-f09a-4b88-b93d-6910c1077ca4
### Steps to reproduce the bug
- Select at least one run
- Click `Add Tag` to add tag
- Click `Add Tag` again, see the previous text
### Code to generate data required to reproduce the bug
_No response_
### Is the console panel in DevTools showing errors relevant to the bug?
_No response_
### Does the network panel in DevTools contain failed requests relevant to the bug?
_No response_
|
closed
|
2025-03-05T13:13:59Z
|
2025-03-10T09:33:07Z
|
https://github.com/mlflow/mlflow/issues/14865
|
[
"bug",
"area/uiux",
"has-closing-pr"
] |
Gumichocopengin8
| 1
|
lukas-blecher/LaTeX-OCR
|
pytorch
| 405
|
Cannot open GUI with latexocr
|
(p9) PS D:\LaTeX-OCR> latexocr
Traceback (most recent call last):
File "C:\Users\50332\anaconda3\envs\p9\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\50332\anaconda3\envs\p9\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\50332\anaconda3\envs\p9\Scripts\latexocr.exe\__main__.py", line 7, in <module>
File "C:\Users\50332\anaconda3\envs\p9\lib\site-packages\pix2tex\__main__.py", line 25, in main
from .gui import main
File "C:\Users\50332\anaconda3\envs\p9\lib\site-packages\pix2tex\gui.py", line 21, in <module>
import pix2tex.resources.resources
File "C:\Users\50332\anaconda3\envs\p9\lib\site-packages\pix2tex\resources\resources.py", line 6, in <module>
from PySide6 import QtCore
ImportError: DLL load failed while importing QtCore: The specified procedure could not be found.
|
open
|
2024-11-27T08:44:48Z
|
2024-12-18T13:31:13Z
|
https://github.com/lukas-blecher/LaTeX-OCR/issues/405
|
[] |
VIRTUALWORLDHEAD0CAICAI
| 8
|
microsoft/MMdnn
|
tensorflow
| 833
|
Mxnet to tensorflow
|
Platform (like ubuntu 16.04/win10): Google Colab
Python version: 3.6.9
Source framework with version : MXNET with GPU
Destination framework with version : Tensorflow 1.14 GPU
Pre-trained model path (webpath or webdisk path): [this](https://github.com/YonghaoHe/A-Light-and-Fast-Face-Detector-for-Edge-Devices/blob/master/head_detection/saved_model/configuration_10_160_17L_4scales_v1_2019-09-20-13-08-26/train_10_160_17L_4scales_v1_iter_800000.params)
Running scripts:
I am trying to convert [this](https://github.com/YonghaoHe/A-Light-and-Fast-Face-Detector-for-Edge-Devices/blob/master/head_detection/symbol_farm/symbol_10_160_17L_4scales_v1_deploy.json) to IR form and then to tensorflow. But I am facing the issues below.
**Code :**
!mmtoir -f mxnet -n symbol_10_160_17L_4scales_v1_deploy.json -d rand_scl --inputShape 3,256,256
**Error messages**
/usr/local/lib/python3.6/dist-packages/mxnet/module/base_module.py:55: UserWarning: You created Module with Module(..., label_names=['softmax_label']) but input with name 'softmax_label' is not found in symbol.list_arguments(). Did you mean one of:
data
warnings.warn(msg)
Warning: MXNet Parser has not supported operator null with name data.
Warning: convert the null operator with name [data] into input layer.
Warning: MXNet Parser has not supported operator slice_axis with name slice_axis16.
Traceback (most recent call last):
File "/usr/local/bin/mmtoir", line 8, in <module>
sys.exit(_main())
File "/usr/local/lib/python3.6/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 192, in _main
ret = _convert(args)
File "/usr/local/lib/python3.6/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 115, in _convert
parser.run(args.dstPath)
File "/usr/local/lib/python3.6/dist-packages/mmdnn/conversion/common/DataStructure/parser.py", line 22, in run
self.gen_IR()
File "/usr/local/lib/python3.6/dist-packages/mmdnn/conversion/mxnet/mxnet_parser.py", line 263, in gen_IR
self.rename_UNKNOWN(current_node)
File "/usr/local/lib/python3.6/dist-packages/mmdnn/conversion/mxnet/mxnet_parser.py", line 374, in rename_UNKNOWN
raise NotImplementedError()
NotImplementedError
Could you please assist me with this?
|
open
|
2020-05-07T18:48:45Z
|
2020-05-08T18:04:48Z
|
https://github.com/microsoft/MMdnn/issues/833
|
[] |
Manideep08
| 1
|
gradio-app/gradio
|
deep-learning
| 9,943
|
How to remove the "Built with Gradio" logo at the bottom of the page
|
- [ ] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Additional context**
Add any other context or screenshots about the feature request here.
|
closed
|
2024-11-12T09:52:20Z
|
2024-11-12T21:03:17Z
|
https://github.com/gradio-app/gradio/issues/9943
|
[] |
yang123456he
| 1
|
Significant-Gravitas/AutoGPT
|
python
| 8,815
|
Agent Page - Hook Up “Run Agent” Button on Agent Page
|
The “Run Agent” button on the Agent Page needs to be updated to add the selected agent to the library. This includes:
1\. Modifying the button’s functionality to trigger the “Add Agent to Library” action.
2\. Integrating the necessary backend API endpoints to support this functionality.
# **Acceptance Criteria:**
• The “Run Agent” button successfully adds the agent to the library.
• API integration is completed and tested.
• UI updates (if any) are aligned with the new functionality.
|
closed
|
2024-11-27T13:06:02Z
|
2024-12-18T13:01:52Z
|
https://github.com/Significant-Gravitas/AutoGPT/issues/8815
|
[
"bug",
"UI",
"platform/frontend"
] |
Swiftyos
| 0
|
piskvorky/gensim
|
nlp
| 3,181
|
Mismatch get_coherence_per_topic and get_coherence for single topic
|
#### Problem description
Hi! I am using Gensim to compute the NPMI coherence for each of my topics. I used the method `get_coherence_per_topic()` and also `get_coherence()` (in the latter case just passing a list with a single topic), and I noticed that the per-topic coherences do not match the values returned by `get_coherence()` for the corresponding topics. In my understanding, the NPMI of a topic should be independent of the number of topics or of the other input topics.
This also happens with the other c_* coherences, but not with the UMASS version.
Thank you!
#### Steps/code/corpus to reproduce
```python
from gensim.test.utils import common_texts, common_dictionary
from gensim.models.ldamodel import LdaModel
from gensim.models.coherencemodel import CoherenceModel
topics = [
['human', 'computer', 'system', 'interface'],
['graph', 'minors', 'trees', 'eps']
]
cm = CoherenceModel(topics=topics, texts=common_texts, coherence='c_npmi',
dictionary=common_dictionary)
coherence = cm.get_coherence_per_topic()
print(coherence) # got [0.23583958321789514, -0.24456941091456053]
cm_topic0 = CoherenceModel(topics=[topics[0]], texts=common_texts,
coherence='c_npmi', dictionary=common_dictionary)
coherence_topic0 = cm_topic0.get_coherence()
print(coherence_topic0) # expect this to be == coherence[0] but got -0.14624062517782566
cm_topic1 = CoherenceModel(topics=[topics[1]], texts=common_texts,
coherence='c_npmi', dictionary=common_dictionary)
coherence_topic1 = cm_topic1.get_coherence()
print(coherence_topic1) # expect this to be == coherence[1] but got -0.31633310918174923
```
#### Versions
Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
Python 3.7.10 (default, May 3 2021, 02:48:31)
[GCC 7.5.0]
Bits 64
NumPy 1.19.5
SciPy 1.4.1
gensim 3.8.3
FAST_VERSION 1
|
closed
|
2021-06-22T09:42:36Z
|
2022-04-25T08:21:06Z
|
https://github.com/piskvorky/gensim/issues/3181
|
[] |
silviatti
| 0
|
twelvedata/twelvedata-python
|
matplotlib
| 20
|
[Question]websocket.TDWebSocket.keep_alive() high cpu usage
|
websocket.TDWebSocket.keep_alive() is simply an infinite loop, which I assume is meant to keep the main thread alive, and it results in high CPU usage. Does having a time.sleep() in here affect the performance of the websocket connection? I personally wouldn't think so, but I'm testing while the market is closed.
```python
@staticmethod
def keep_alive():
    while True:
        pass
```
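For what it's worth, a minimal sketch of the sleep-based variant (assuming the loop's only job is to keep the main thread alive while the websocket runs on its own thread):
```python
import time

class TDWebSocketPatched:
    @staticmethod
    def keep_alive():
        # time.sleep releases the GIL and yields the CPU between iterations,
        # so the busy-wait's 100% CPU usage goes away; message handling
        # happens on the websocket's own thread and should be unaffected
        while True:
            time.sleep(1)
```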
Thanks
|
closed
|
2020-10-24T23:12:31Z
|
2020-11-08T08:05:38Z
|
https://github.com/twelvedata/twelvedata-python/issues/20
|
[] |
willhess92
| 1
|
flasgger/flasgger
|
rest-api
| 389
|
Is there reference documentation for this project?
|
http://flasgger.pythonanywhere.com/ just has "Coming Soon".
|
closed
|
2020-04-19T01:19:46Z
|
2020-04-22T00:31:16Z
|
https://github.com/flasgger/flasgger/issues/389
|
[] |
derekbekoe
| 2
|
pytorch/pytorch
|
numpy
| 149,725
|
`torch.compile` does not work when `set_priority` is specified in `sdpa_kernel`
|
### 🐛 Describe the bug
Model compilation does not work when the `set_priority` kwarg is provided to the `sdpa_kernel` context manager. See example below.
```py
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
from torch.nn.functional import scaled_dot_product_attention
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.o = torch.nn.Linear(64, 128)
def forward(self, q, k, v, mask):
with sdpa_kernel(backends=[SDPBackend.EFFICIENT_ATTENTION, SDPBackend.MATH], set_priority=True):
out = scaled_dot_product_attention(
query=q,
key=k,
value=v,
attn_mask=mask,
is_causal=False,
scale=1.0,
)
return out
model = Model().to("cuda:0")
model = torch.compile(model)
q = torch.randn(32, 1, 10, 64).to("cuda:0")
k = torch.randn(32, 1, 6, 64).to("cuda:0")
v = torch.randn(32, 1, 6, 64).to("cuda:0")
mask = torch.ones(32, 1, 10, 6).to("cuda:0")
model(q, k, v, mask)
```
Fails with:
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[2], line 32
29 v = torch.randn(32, 1, 6, 64).to("cuda:0")
30 mask = torch.ones(32, 1, 10, 6).to("cuda:0")
---> 32 model(q, k, v, mask)
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/nn/modules/module.py:1739, in Module._wrapped_call_impl(self, *args, **kwargs)
1737 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1738 else:
-> 1739 return self._call_impl(*args, **kwargs)
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/nn/modules/module.py:1750, in Module._call_impl(self, *args, **kwargs)
1745 # If we don't have any hooks, we want to skip the rest of the logic in
1746 # this function, and just call forward.
1747 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1748 or _global_backward_pre_hooks or _global_backward_hooks
1749 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1750 return forward_call(*args, **kwargs)
1752 result = None
1753 called_always_called_hooks = set()
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:574, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
569 saved_dynamic_layer_stack_depth = (
570 torch._C._functorch.get_dynamic_layer_stack_depth()
571 )
573 try:
--> 574 return fn(*args, **kwargs)
575 finally:
576 # Restore the dynamic layer stack depth if necessary.
577 torch._C._functorch.pop_dynamic_layer_stack_and_undo_to_depth(
578 saved_dynamic_layer_stack_depth
579 )
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/nn/modules/module.py:1739, in Module._wrapped_call_impl(self, *args, **kwargs)
1737 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1738 else:
-> 1739 return self._call_impl(*args, **kwargs)
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/nn/modules/module.py:1750, in Module._call_impl(self, *args, **kwargs)
1745 # If we don't have any hooks, we want to skip the rest of the logic in
1746 # this function, and just call forward.
1747 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1748 or _global_backward_pre_hooks or _global_backward_hooks
1749 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1750 return forward_call(*args, **kwargs)
1752 result = None
1753 called_always_called_hooks = set()
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:1380, in CatchErrorsWrapper.__call__(self, frame, cache_entry, frame_state)
1374 return hijacked_callback(
1375 frame, cache_entry, self.hooks, frame_state
1376 )
1378 with compile_lock, _disable_current_modes():
1379 # skip=1: skip this frame
-> 1380 return self._torchdynamo_orig_callable(
1381 frame, cache_entry, self.hooks, frame_state, skip=1
1382 )
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:1164, in ConvertFrame.__call__(self, frame, cache_entry, hooks, frame_state, skip)
1162 counters["frames"]["total"] += 1
1163 try:
-> 1164 result = self._inner_convert(
1165 frame, cache_entry, hooks, frame_state, skip=skip + 1
1166 )
1167 counters["frames"]["ok"] += 1
1168 return result
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:547, in ConvertFrameAssert.__call__(self, frame, cache_entry, hooks, frame_state, skip)
544 dynamo_tls.traced_frame_infos.append(info)
546 with compile_context(CompileContext(compile_id)):
--> 547 return _compile(
548 frame.f_code,
549 frame.f_globals,
550 frame.f_locals,
551 frame.f_builtins,
552 frame.closure,
553 self._torchdynamo_orig_callable,
554 self._one_graph,
555 self._export,
556 self._export_constraints,
557 hooks,
558 cache_entry,
559 cache_size,
560 frame,
561 frame_state=frame_state,
562 compile_id=compile_id,
563 skip=skip + 1,
564 )
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:986, in _compile(code, globals, locals, builtins, closure, compiler_fn, one_graph, export, export_constraints, hooks, cache_entry, cache_size, frame, frame_state, compile_id, skip)
984 guarded_code = None
985 try:
--> 986 guarded_code = compile_inner(code, one_graph, hooks, transform)
988 # NB: We only put_code_state in success case. Success case here
989 # does include graph breaks; specifically, if a graph break still
990 # resulted in a partially compiled graph, we WILL return here. An
(...)
995 # to upload for graph break though, because this can prevent
996 # extra graph break compilations.)
997 put_code_state()
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:715, in _compile.<locals>.compile_inner(code, one_graph, hooks, transform)
713 stack.enter_context(torch._dynamo.callback_handler.install_callbacks())
714 stack.enter_context(CompileTimeInstructionCounter.record())
--> 715 return _compile_inner(code, one_graph, hooks, transform)
717 return None
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_utils_internal.py:95, in compile_time_strobelight_meta.<locals>.compile_time_strobelight_meta_inner.<locals>.wrapper_function(*args, **kwargs)
92 kwargs["skip"] = skip + 1
94 if not StrobelightCompileTimeProfiler.enabled:
---> 95 return function(*args, **kwargs)
97 return StrobelightCompileTimeProfiler.profile_compile_time(
98 function, phase_name, *args, **kwargs
99 )
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:750, in _compile.<locals>._compile_inner(code, one_graph, hooks, transform)
748 CompileContext.get().attempt = attempt
749 try:
--> 750 out_code = transform_code_object(code, transform)
751 break
752 except exc.RestartAnalysis as e:
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py:1361, in transform_code_object(code, transformations, safe)
1358 instructions = cleaned_instructions(code, safe)
1359 propagate_line_nums(instructions)
-> 1361 transformations(instructions, code_options)
1362 return clean_and_assemble_instructions(instructions, keys, code_options)[1]
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:231, in preserve_global_state.<locals>._fn(*args, **kwargs)
229 exit_stack.enter_context(torch_function_mode_stack_state_mgr)
230 try:
--> 231 return fn(*args, **kwargs)
232 finally:
233 cleanup.close()
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:662, in _compile.<locals>.transform(instructions, code_options)
660 try:
661 with tracing(tracer.output.tracing_context), tracer.set_current_tx():
--> 662 tracer.run()
663 except exc.UnspecializeRestartAnalysis:
664 speculation_log.clear()
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2868, in InstructionTranslator.run(self)
2867 def run(self):
-> 2868 super().run()
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:1052, in InstructionTranslatorBase.run(self)
1050 try:
1051 self.output.push_tx(self)
-> 1052 while self.step():
1053 pass
1054 except TensorifyScalarRestartAnalysis:
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:962, in InstructionTranslatorBase.step(self)
959 self.update_block_stack(inst)
961 try:
--> 962 self.dispatch_table[inst.opcode](self, inst)
963 return not self.output.should_exit
964 except TensorifyScalarRestartAnalysis:
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:659, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
657 return handle_graph_break(self, inst, speculation.reason)
658 try:
--> 659 return inner_fn(self, inst)
660 except Unsupported as excp:
661 if self.generic_context_manager_depth > 0:
662 # We don't support graph break under GenericContextWrappingVariable,
663 # If there is, we roll back to the checkpoint and fall back.
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2341, in InstructionTranslatorBase.CALL(self, inst)
2339 @break_graph_if_unsupported(push=1)
2340 def CALL(self, inst):
-> 2341 self._call(inst)
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2335, in InstructionTranslatorBase._call(self, inst, call_kw)
2330 kwargs = {}
2332 try:
2333 # if call_function fails, need to set kw_names to None, otherwise
2334 # a subsequent call may have self.kw_names set to an old value
-> 2335 self.call_function(fn, args, kwargs)
2336 finally:
2337 self.kw_names = None
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:897, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
895 if inner_fn and callable(inner_fn) and is_forbidden(inner_fn):
896 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 897 self.push(fn.call_function(self, args, kwargs))
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/variables/torch.py:352, in TorchCtxManagerClassVariable.call_function(self, tx, args, kwargs)
348 return FSDPParamGroupUseTrainingStateVariable.create(
349 tx, args[0], args[1].as_python_constant()
350 )
351 elif self.value is torch.nn.attention.sdpa_kernel:
--> 352 assert len(args) == 1 or (len(kwargs) == 1 and "backends" in kwargs)
353 backends = args[0] if len(args) == 1 else kwargs["backends"]
354 return SDPAKernelVariable.create(tx, backends.as_python_constant())
AssertionError:
from user code:
File "/tmp/ipykernel_3180308/3479898774.py", line 12, in forward
with sdpa_kernel(backends=[SDPBackend.EFFICIENT_ATTENTION, SDPBackend.MATH], set_priority=True):
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu
|
closed
|
2025-03-21T12:19:12Z
|
2025-03-21T19:19:54Z
|
https://github.com/pytorch/pytorch/issues/149725
|
[
"oncall: pt2",
"module: sdpa"
] |
abdulfatir
| 2
|
xorbitsai/xorbits
|
numpy
| 417
|
BUG: ``hash_dataframe_on`` failed when ``on`` is callable on GPU
|
Note that the issue tracker is NOT the place for general support. For
discussions about development, questions about usage, or any general questions,
contact us on https://discuss.xorbits.io/.
|
closed
|
2023-04-28T08:49:28Z
|
2023-05-15T06:45:29Z
|
https://github.com/xorbitsai/xorbits/issues/417
|
[
"bug",
"gpu"
] |
ChengjieLi28
| 0
|
plotly/plotly.py
|
plotly
| 4,198
|
5.14.1: seems like test suite needs to be updated for `numpy`
|
I'm packaging your module as an rpm package, so I'm using the typical PEP 517 based build, install and test cycle used when building packages from a non-root account.
- `python3 -sBm build -w --no-isolation`
- because I'm calling `build` with `--no-isolation`, I'm using only locally installed modules during all processes
- install the .whl file in </install/prefix> using the `installer` module
- run pytest with $PYTHONPATH pointing to sitearch and sitelib inside </install/prefix>
- build is performed in an env which is *`cut off from access to the public network`* (pytest is executed with `-m "not network"`)
The build environment has `numpy` 1.24.3.
Here is pytest output:
<details>
```console
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-plotly-5.14.1-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-plotly-5.14.1-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra -m 'not network'
==================================================================================== test session starts ====================================================================================
platform linux -- Python 3.8.16, pytest-7.3.1, pluggy-1.0.0
rootdir: /home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly
configfile: pytest.ini
plugins: anyio-3.6.2
collected 2628 items / 1 error
========================================================================================== ERRORS ===========================================================================================
_________________________________________________________ ERROR collecting _plotly_utils/tests/validators/test_integer_validator.py _________________________________________________________
_plotly_utils/tests/validators/test_integer_validator.py:77: in <module>
@pytest.mark.parametrize("val", [-2, -123, np.iinfo(np.int).min])
/usr/lib64/python3.8/site-packages/numpy/__init__.py:305: in __getattr__
raise AttributeError(__former_attrs__[attr])
E AttributeError: module 'numpy' has no attribute 'int'.
E `np.int` was a deprecated alias for the builtin `int`. To avoid this error in existing code, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
E The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
E https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
===================================================================================== warnings summary ======================================================================================
_plotly_utils/tests/validators/test_enumerated_validator.py:17
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly/_plotly_utils/tests/validators/test_enumerated_validator.py:17: DeprecationWarning:
invalid escape sequence \d
_plotly_utils/tests/validators/test_enumerated_validator.py:29
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly/_plotly_utils/tests/validators/test_enumerated_validator.py:29: DeprecationWarning:
invalid escape sequence \d
../../../../../../../../usr/lib/python3.8/site-packages/jupyter_client/connect.py:20
/usr/lib/python3.8/site-packages/jupyter_client/connect.py:20: DeprecationWarning:
Jupyter is migrating its paths to use standard platformdirs
given by the platformdirs library. To remove this warning and
see the appropriate new directories, set the environment variable
`JUPYTER_PLATFORM_DIRS=1` and then run `jupyter --paths`.
The use of platformdirs will be the default in `jupyter_core` v6
../../../../../../../../usr/lib/python3.8/site-packages/traitlets/traitlets.py:1016: 100 warnings
/usr/lib/python3.8/site-packages/traitlets/traitlets.py:1016: DeprecationWarning:
Widget._active_widgets is deprecated.
../../../../../../../../usr/lib/python3.8/site-packages/traitlets/traitlets.py:1016: 100 warnings
/usr/lib/python3.8/site-packages/traitlets/traitlets.py:1016: DeprecationWarning:
Widget._widget_types is deprecated.
../../../../../../../../usr/lib/python3.8/site-packages/traitlets/traitlets.py:1016: 100 warnings
/usr/lib/python3.8/site-packages/traitlets/traitlets.py:1016: DeprecationWarning:
Widget.widget_types is deprecated.
../../../../../../../../usr/lib/python3.8/site-packages/traitlets/traitlets.py:1016: 100 warnings
/usr/lib/python3.8/site-packages/traitlets/traitlets.py:1016: DeprecationWarning:
Widget.widgets is deprecated.
../../../../../../../../usr/lib/python3.8/site-packages/ipywidgets/widgets/widget.py:528
/usr/lib/python3.8/site-packages/ipywidgets/widgets/widget.py:528: DeprecationWarning:
The `ipykernel.comm.Comm` class has been deprecated. Please use the `comm` module instead.For creating comms, use the function `from comm import create_comm`.
plotly/tests/test_core/test_subplots/test_make_subplots.py:1945
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:1945: DeprecationWarning:
invalid escape sequence \(
plotly/tests/test_core/test_subplots/test_make_subplots.py:1947
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:1947: DeprecationWarning:
invalid escape sequence \.
plotly/tests/test_core/test_subplots/test_make_subplots.py:1948
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:1948: DeprecationWarning:
invalid escape sequence \(
plotly/tests/test_core/test_subplots/test_make_subplots.py:1957
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:1957: DeprecationWarning:
invalid escape sequence \(
plotly/tests/test_core/test_subplots/test_make_subplots.py:1959
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:1959: DeprecationWarning:
invalid escape sequence \.
plotly/tests/test_core/test_subplots/test_make_subplots.py:1960
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:1960: DeprecationWarning:
invalid escape sequence \(
plotly/tests/test_core/test_subplots/test_make_subplots.py:2002
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:2002: DeprecationWarning:
invalid escape sequence \.
plotly/tests/test_core/test_subplots/test_make_subplots.py:2006
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:2006: DeprecationWarning:
invalid escape sequence \.
plotly/tests/test_core/test_subplots/test_make_subplots.py:2010
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:2010: DeprecationWarning:
invalid escape sequence \.
plotly/tests/test_core/test_subplots/test_make_subplots.py:2014
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.14.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:2014: DeprecationWarning:
invalid escape sequence \.
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================== short test summary info ==================================================================================
ERROR _plotly_utils/tests/validators/test_integer_validator.py - AttributeError: module 'numpy' has no attribute 'int'.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================== 414 warnings, 1 error in 11.20s ==============================================================================
```
</details>
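For reference, the failing line uses the `np.int` alias that NumPy 1.24 removed; a hedged sketch of the one-line fix (assuming platform-int precision was intended, per the deprecation note):
```python
import numpy as np
import pytest

# np.int was removed in NumPy 1.24; np.int_ (the platform int) keeps the intent
@pytest.mark.parametrize("val", [-2, -123, np.iinfo(np.int_).min])
def test_acceptance_min_int(val):  # stand-in for the original validator test
    assert val < 0
```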
Here is list of installed modules in build env
<details>
```console
Package Version
----------------------------- -----------------
aiofiles 23.1.0
alabaster 0.7.13
anyio 3.6.2
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
arrow 1.2.3
asttokens 2.2.1
attrs 23.1.0
Babel 2.12.1
backcall 0.2.0
beautifulsoup4 4.12.2
bleach 6.0.0
Brlapi 0.8.4
build 0.10.0
cffi 1.15.1
charset-normalizer 3.1.0
comm 0.1.2
contourpy 1.0.7
cycler 0.11.0
debugpy 1.6.7
decorator 5.1.1
defusedxml 0.7.1
distro 1.8.0
docutils 0.19
exceptiongroup 1.0.0
executing 1.2.0
fastjsonschema 2.16.3
fonttools 4.39.3
fqdn 1.5.1
gpg 1.20.0
html5lib 1.1
idna 3.4
imagesize 1.4.1
importlib-metadata 6.6.0
importlib-resources 5.12.0
iniconfig 2.0.0
installer 0.7.0
ipykernel 6.23.0
ipython 8.12.0
ipython-genutils 0.2.0
ipywidgets 8.0.3
isoduration 20.11.0
jedi 0.18.2
Jinja2 3.1.2
json5 0.9.12
jsonpointer 2.2
jsonschema 4.17.3
jupyter_client 8.2.0
jupyter_core 5.3.0
jupyter-events 0.6.3
jupyter_server 2.5.0
jupyter_server_fileid 0.9.0
jupyter_server_terminals 0.4.4
jupyter_server_ydoc 0.8.0
jupyter-ydoc 1.0.2
jupyterlab 3.6.3
jupyterlab-pygments 0.1.2
jupyterlab_server 2.22.1
jupyterlab-widgets 3.0.2
kiwisolver 1.4.4
libcomps 0.1.19
louis 3.25.0
MarkupSafe 2.1.2
matplotlib 3.6.3
matplotlib-inline 0.1.6
mistune 2.0.5
nbclassic 0.5.6
nbclient 0.7.4
nbconvert 7.3.1
nbformat 5.8.0
nest-asyncio 1.5.6
notebook 6.5.4
notebook_shim 0.2.3
numpy 1.24.3
olefile 0.46
packaging 23.1
pandas 2.0.1
pandocfilters 1.5.0
parso 0.8.3
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.5.0
pkgutil_resolve_name 1.3.10
platformdirs 3.5.0
pluggy 1.0.0
ply 3.11
prometheus-client 0.16.0
prompt-toolkit 3.0.38
psutil 5.9.2
ptyprocess 0.7.0
pure-eval 0.2.2
pycparser 2.21
Pygments 2.15.1
PyGObject 3.44.1
pyparsing 3.0.9
pyproject_hooks 1.0.0
pyrsistent 0.19.3
pytest 7.3.1
python-dateutil 2.8.2
python-json-logger 2.0.7
pytz 2023.2
PyYAML 6.0
pyzmq 24.0.1
requests 2.28.2
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
SciPy 1.8.1
Send2Trash 1.8.2
setuptools 67.7.2
six 1.16.0
sniffio 1.3.0
snowballstemmer 2.2.0
soupsieve 2.4.1
Sphinx 6.2.1
sphinxcontrib-applehelp 1.0.4
sphinxcontrib-devhelp 1.0.2.dev20230415
sphinxcontrib-htmlhelp 2.0.0
sphinxcontrib-jsmath 1.0.1.dev20230415
sphinxcontrib-qthelp 1.0.3.dev20230415
sphinxcontrib-serializinghtml 1.1.5
stack-data 0.6.2
tenacity 8.2.2
terminado 0.17.1
tinycss2 1.2.1
tomli 2.0.1
tornado 6.2
traitlets 5.9.0
typing_extensions 4.5.0
uri-template 1.2.0
urllib3 1.26.15
wcwidth 0.2.6
webcolors 1.13
webencodings 0.5.1
websocket-client 1.5.1
wheel 0.40.0
widgetsnbextension 4.0.7
xarray 2022.12.0
y-py 0.6.1
ypy-websocket 0.8.4
zipp 3.15.0
```
</details>
|
closed
|
2023-05-09T13:16:38Z
|
2023-05-09T13:29:02Z
|
https://github.com/plotly/plotly.py/issues/4198
|
[] |
kloczek
| 1
|
plotly/dash
|
dash
| 3,022
|
output not being updated by callback
|
Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Google Colab Pro
- replace the result of `pip list | grep dash` below
```
dash 2.18.1
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Describe the bug**
Using a Dash app with a dcc.Input() callback, I noticed the output is not updated accordingly. There is no error message.
If I input "hello" in the text box, the output should be updated to "hello"; instead the initial value remains.
**Expected behavior**
If I input "hello" in the text box, the output should be updated to "hello"; instead the initial value remains.
**Screenshots**
```
app = JupyterDash(__name__)
port = 8053
"""authenticate using token"""
!ngrok config add-authtoken 2S97GELhJsGO---token---koHb24wsr #Configure authtoken
"""Open a ngrok tunnel to the HTTP server"""
public_url = ngrok.connect(port).public_url
print(f'* ngrok tunnel {public_url} -> http://127.0.0.1:{port}')
"""configure app layout"""
app.layout = html.Div(
children=[
html.H5('change value in text box'),
html.Div(["input: ", dcc.Input(id='my-input', value='initial-value', type='text')]),
html.Div(id='my-output'),
])
"""define callback"""
@app.callback(
Output(component_id='my-output', component_property='children'),
Input(component_id='my-input', component_property='value')
)
def update_input(text_input):
return print(f'output: {text_input}')
if __name__ == "__main__":
app.run_server(debug=False, jupyter_server_url=public_url, port=port, mode='external')
```
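For reference, `print()` returns `None`, so the callback above never hands Dash a value to render; a minimal corrected callback:
```python
@app.callback(
    Output(component_id='my-output', component_property='children'),
    Input(component_id='my-input', component_property='value')
)
def update_input(text_input):
    # return the string itself; `return print(...)` returns None
    return f'output: {text_input}'
```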
|
closed
|
2024-10-02T14:31:05Z
|
2024-10-04T16:56:16Z
|
https://github.com/plotly/dash/issues/3022
|
[
"bug",
"P3"
] |
DSAGRO3F
| 3
|
voila-dashboards/voila
|
jupyter
| 897
|
Using Javascript to Edit Voila pages
|
Is it possible to use Javascript to edit widgets and have the widget state be updated?
e.g. the use case here is: I have a text widget that I would like to populate from localStorage with JavaScript after the page loads. When I grab the DOM element and edit `.value`, it doesn't actually update the widget model state.
|
open
|
2021-06-01T19:20:29Z
|
2021-06-23T13:27:11Z
|
https://github.com/voila-dashboards/voila/issues/897
|
[] |
lauralindy
| 1
|
KaiyangZhou/deep-person-reid
|
computer-vision
| 471
|
Auxiliary loss
|
Many thanks to all the contributors for this project.
In your paper https://arxiv.org/pdf/1905.00953.pdf you use the cross-entropy loss as the main objective and the triplet loss as an auxiliary loss with a balancing weight (which needs to be tuned) to get the result shown in table 12e.
How can I apply this auxiliary loss when training the model?
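For illustration only, a hedged PyTorch sketch of the weighted sum the paper describes (all names and values below are placeholders; I believe torchreid's `ImageTripletEngine` exposes similar `weight_t`/`weight_x` knobs, but please check the docs):
```python
import torch
import torch.nn.functional as F

# placeholder tensors standing in for real model outputs and a computed triplet loss
logits = torch.randn(32, 751, requires_grad=True)          # ID-classification logits
labels = torch.randint(0, 751, (32,))
triplet_loss = torch.randn((), requires_grad=True).abs()   # stand-in triplet term

weight_t = 0.5  # the balancing weight that needs tuning
total_loss = F.cross_entropy(logits, labels) + weight_t * triplet_loss
total_loss.backward()
```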
Thanks!
|
open
|
2021-10-26T01:48:36Z
|
2021-11-29T12:15:17Z
|
https://github.com/KaiyangZhou/deep-person-reid/issues/471
|
[] |
Thangbluee
| 2
|
laurentS/slowapi
|
fastapi
| 31
|
Ratelimit doesn't work after token expires
|
I am rate limiting using two approaches:
i) based on IP address (for endpoints that have no access token): works fine
ii) based on user id (obtained from the JWT token)
I have used https://pypi.org/project/fastapi-jwt-auth/ for JWT auth. Based on #25, in limiter.py:
```
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded
from app.utility.config import DEFAULT_RATE_LIMIT
from starlette.requests import Request
from starlette.responses import JSONResponse, Response
from fastapi_jwt_auth import AuthJWT
from app.core.security.security_utils import decrypt_data
class LimiterClass:
def __init__(self):
#redis used locally
self.limiter = Limiter(key_func=get_user_id_or_ip, strategy="moving-window", default_limits=[DEFAULT_RATE_LIMIT])
"""
Method : Get user_id for JWT access token , IP address for not having token
@Param :
1. request : type-> Request
Return: User id or IP address
"""
def get_user_id_or_ip(request : Request):
authorize = AuthJWT(request) # initial instance fastapi-jwt-auth
authorize.jwt_optional() # for validation jwt token
return decrypt_data(authorize.get_jwt_subject()) or get_remote_address
```
This works fine with an unexpired JWT access token and rate-limits the user. But when the JWT access token expires, it throws an error.
```
ERROR:uvicorn.error:Exception in ASGI application
Traceback (most recent call last):
File "C:\****\anaconda3\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 394, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "C:\****\anaconda3\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "C:\****\anaconda3\lib\site-packages\fastapi\applications.py", line 199, in __call__
await super().__call__(scope, receive, send)
File "C:\****\anaconda3\lib\site-packages\starlette\applications.py", line 111, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\****\anaconda3\lib\site-packages\starlette\middleware\errors.py", line 181, in __call__
raise exc from None
File "C:\****\anaconda3\lib\site-packages\starlette\middleware\errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "C:\****\anaconda3\lib\site-packages\starlette\middleware\base.py", line 25, in __call__
response = await self.dispatch_func(request, self.call_next)
File "C:\****\anaconda3\lib\site-packages\slowapi\middleware.py", line 51, in dispatch
return exception_handler(request, e)
File "C:\****l\anaconda3\lib\site-packages\slowapi\extension.py", line 88, in _rate_limit_exceeded_handler
{"error": f"Rate limit exceeded: {exc.detail}"}, status_code=429
AttributeError: 'JWTDecodeError' object has no attribute 'detail'
```
When the access token expires, I need to return an HTTP 403 Forbidden message instead.
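For what it's worth, a sketch of one way to keep the key function from leaking `JWTDecodeError` (hedged: whether a 403 can be raised from inside slowapi's middleware depends on the middleware stack, so this version falls back to IP keying and leaves the 403 to a regular dependency). Note also that the original snippet returns the `get_remote_address` function object itself instead of calling it; imports are as in limiter.py above:
```python
def get_user_id_or_ip(request: Request):
    authorize = AuthJWT(request)
    try:
        authorize.jwt_optional()
        user_id = decrypt_data(authorize.get_jwt_subject())
    except Exception:  # e.g. JWTDecodeError on an expired token
        user_id = None
    return user_id or get_remote_address(request)  # call the fallback
```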
|
open
|
2021-01-08T15:01:19Z
|
2022-11-28T10:35:26Z
|
https://github.com/laurentS/slowapi/issues/31
|
[] |
himalacharya
| 4
|
Evil0ctal/Douyin_TikTok_Download_API
|
web-scraping
| 582
|
[BUG] Multi-part Douyin videos are recognized as multi-part image posts during parsing
|
***On which platform did the error occur?***
e.g.: Douyin
***On which endpoint did the error occur?***
e.g.: Web APP
***What input value was submitted?***
e.g.: [short video link](https://v.douyin.com/i5g9Anws/&minimal=false)
***Did you try again?***
e.g.: Yes
***Have you checked this project's README or API documentation?***
e.g.: Yes, and I am quite sure the problem is caused by the program.
|
open
|
2025-03-12T08:50:04Z
|
2025-03-12T08:50:04Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/582
|
[
"BUG"
] |
miaoxutao123
| 0
|
brightmart/text_classification
|
nlp
| 64
|
a1_seq2seq_attention_train
|
When running the a1_seq2seq_attention_train.py file, I ran into the error below. I hope to get your help.
ValueError: Variable W_initial_state1 already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
File "/home/qiu/PycharmProjects/text_classification-master/a06_Seq2seqWithAttention/a1_seq2seq_attention_model.py", line 173, in instantiate_weights
self.W_initial_state1 = tf.get_variable("W_initial_state1", shape=[self.hidden_size, self.hidden_size*2], initializer=self.initializer)
File "/home/qiu/PycharmProjects/text_classification-master/a06_Seq2seqWithAttention/a1_seq2seq_attention_model.py", line 40, in __init__
self.instantiate_weights()
File "/home/qiu/PycharmProjects/text_classification-master/a06_Seq2seqWithAttention/a1_seq2seq_attention_model.py", line 233, in test
vocab_size, embed_size,hidden_size, is_training,decoder_sent_length=decoder_sent_length,l2_lambda=l2_lambda)
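For reference, this TF1 error means the model's weights are being created twice in the same default graph (e.g. the test() function builds a second model). A minimal hedged sketch of the usual workaround, allowing variable reuse via `tf.AUTO_REUSE` (scope name and sizes are illustrative):
```python
import tensorflow as tf  # TF 1.x API

hidden_size = 128  # illustrative value

# wrapping variable creation in a scope with reuse=tf.AUTO_REUSE lets a
# second instantiation pick up the existing variables instead of failing
with tf.variable_scope("seq2seq_attention", reuse=tf.AUTO_REUSE):
    w = tf.get_variable("W_initial_state1",
                        shape=[hidden_size, hidden_size * 2])
```
Alternatively, calling `tf.reset_default_graph()` before building the second model avoids the collision entirely.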
|
open
|
2018-06-27T11:29:40Z
|
2018-06-27T15:03:09Z
|
https://github.com/brightmart/text_classification/issues/64
|
[] |
tangdouer
| 1
|
yt-dlp/yt-dlp
|
python
| 12,613
|
Vimeo asking for embedding URL when unnecessary
|
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Monaco
### Provide a description that is worded well enough to be understood
When you try to download https://player.vimeo.com/video/238947119, you get an error saying that you need to specify the URL of the page this video was embedded in. But that is unnecessary in this case, because if you strip the "player" part of the domain, i.e. https://vimeo.com/video/238947119, the video downloads fine.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://player.vimeo.com/video/238947119']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.02.19 from yt-dlp/yt-dlp [4985a4041]
[debug] Lazy loading extractors is disabled
[debug] Python 3.12.9 (CPython x86_64 64bit) - Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2025.01.31, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-2.3.0, websockets-13.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Plugin directories: none
[debug] Loaded 1844 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2025.02.19 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2025.02.19 from yt-dlp/yt-dlp)
[vimeo] Extracting URL: https://player.vimeo.com/video/238947119
[vimeo] 238947119: Downloading webpage
ERROR: [vimeo] 238947119: Cannot download embed-only video without embedding URL. Please call yt-dlp with the URL of the page that embeds this video.
File "/home/ubuntu/.local/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 747, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/.local/lib/python3.12/site-packages/yt_dlp/extractor/vimeo.py", line 966, in _real_extract
raise ExtractorError(
```
|
open
|
2025-03-15T00:31:33Z
|
2025-03-19T13:08:23Z
|
https://github.com/yt-dlp/yt-dlp/issues/12613
|
[
"question"
] |
emv33
| 3
|
PaddlePaddle/ERNIE
|
nlp
| 209
|
Are there any plans to pretrain a Chinese XL-NET?
|
XL-NET does not have a Chinese version yet; it should perform better than BERT and ERNIE.
|
closed
|
2019-07-17T06:42:14Z
|
2019-07-24T11:46:11Z
|
https://github.com/PaddlePaddle/ERNIE/issues/209
|
[] |
daizh
| 2
|
mage-ai/mage-ai
|
data-science
| 5,632
|
[BUG] [api.views] Action: list kernels None, no code is executed when running mage-ai from command line
|
### Mage version
0.9.75
### Describe the bug
While running mage-ai locally from the command line ("mage start ben"), I can't execute any code from the browser; the execution button spins forever and the Python button turns yellow. I can't even restart the kernel.
The issue was reported previously, and it was mentioned that it had already been addressed.
This is my Windows conda setup:
Python 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:48:34) [MSC v.1929 64 bit (AMD64)] on win32
When running via Docker with the latest mage-ai image, everything works fine.
The same issue occurs with conda python 3.11.
Many thanks for looking into it.
### To reproduce
Just create a new conda venv, pip install mage-ai, and run it from the command line: mage start xyz.
Tested against python 3.11 and 3.12 versions.
### Expected behavior
To see the data in the browser, or at least an error message.
### Screenshots
<img width="719" alt="mageai_infinitive_load" src="https://github.com/user-attachments/assets/82875002-99eb-4a9b-8621-094bc72f76fd" />
<img width="779" alt="mageai_infinitive_load_backend" src="https://github.com/user-attachments/assets/463dbe60-7594-498c-b4d8-b086d005b4ce" />
<img width="773" alt="mageai_infinitive_load_frontend_network" src="https://github.com/user-attachments/assets/c9b42b97-7738-45ee-ad41-e18de0cc94ea" />
<img width="780" alt="mageai_infinitive_load_frontend_network_request" src="https://github.com/user-attachments/assets/dcddb6ce-10c5-4191-b5e4-4e3e68acdf3b" />
<img width="623" alt="mageai_infinitive_load_frontend_network_response" src="https://github.com/user-attachments/assets/a5d07491-0b6b-4335-8975-0ac60fd62cf5" />
### Operating system
windows 11
http://127.0.0.1:6789/overview?tab=today[](url)
### Additional context
_No response_
|
open
|
2025-01-05T20:19:44Z
|
2025-01-05T20:21:36Z
|
https://github.com/mage-ai/mage-ai/issues/5632
|
[
"bug"
] |
mapapa
| 0
|
mwaskom/seaborn
|
pandas
| 3,485
|
seaborn\_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
|
I get this warning all the time, in every plot I make with a pandas data frame. I use Seaborn 0.12.2.
`seaborn\_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead`
|
closed
|
2023-09-21T01:06:32Z
|
2023-09-21T12:17:25Z
|
https://github.com/mwaskom/seaborn/issues/3485
|
[] |
kiasar
| 1
|
sloria/TextBlob
|
nlp
| 157
|
Assign Prior Probabilities to NaiveBayesClassifier train_set(word)
|
Does anyone know how to assign a prior probability to certain words in a train_set for better classification using a NaiveBayesClassifier?
For example:
If I have a sentence/train_set like --> ('I have 10 apples', 'fruits')
I want to assign the word 'apples' more probability, so that this single word determines the label as fruits.
|
open
|
2017-04-01T06:14:19Z
|
2017-04-01T06:14:19Z
|
https://github.com/sloria/TextBlob/issues/157
|
[] |
nakuldahiwade
| 0
|
skfolio/skfolio
|
scikit-learn
| 11
|
[BUG] Annualization factor with 252 business days in a year
|
**Describe the bug**
All metrics use the 255-business-days-per-year standard.
On average, a US-based calendar has 252 business days in a year.
While I understand this depends on the country and on the specific year, annualization metrics should at least be explicit about the factor they use.
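For instance, a sketch of the convention in question (assuming daily returns and the usual square-root-of-time scaling):
```python
import numpy as np

TRADING_DAYS = 252  # typical US calendar; 255 is what the library reportedly uses

daily_vol = 0.01
annualized_vol = daily_vol * np.sqrt(TRADING_DAYS)  # ~0.1587 vs ~0.1597 with 255
```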
**Expected behavior**
Annualization should be based on typically 252 business days for US based user.
|
closed
|
2024-01-17T08:59:34Z
|
2024-01-22T20:10:09Z
|
https://github.com/skfolio/skfolio/issues/11
|
[
"bug"
] |
CarloNicolini
| 2
|
akfamily/akshare
|
data-science
| 5,695
|
API question: is there INE position-ranking data?
|
Is there an interface like this: futures_ine_position_rank(date='20250218')?
or
like this: get_ine_rank_table(date='20250218')?
|
closed
|
2025-02-18T15:40:11Z
|
2025-02-19T08:56:26Z
|
https://github.com/akfamily/akshare/issues/5695
|
[] |
yufusuzi
| 0
|
databricks/spark-sklearn
|
scikit-learn
| 12
|
Training large number of models
|
Hi,
This looks fantastic and looks to almost solve one of my problems.
Essentially I have a model to predict the amount of sales of a particular product for each day, given various features (historical sales volumes, type of store, day of week, time of year, etc.). The training set fits easily in memory.
I want to train this model for 1000s of different products (we need a model per product).
The features and model used are the same. The exact values of the features will change per product.
In pseudo code this looks like:
```
for product in products:  # we can parallelise this loop on a single machine, but could we parallelise it over Spark?
    # Extract train dataset from Hive over ODBC
    # Train model
    # Output predictions
    pass
```
Any thoughts if spark-sklearn could be used to support this?
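For what it's worth, a minimal sketch of what this might look like with plain PySpark (the helper functions are hypothetical placeholders; `sc` is the SparkContext):
```python
# hypothetical helpers: each Spark task trains one product's model independently
def train_one(product):
    df = load_training_data(product)   # placeholder: pull the train set from Hive over ODBC
    model = fit_sklearn_model(df)      # placeholder: same features/model for every product
    return product, model

# fitted sklearn models are typically picklable, so collect() can return them
results = sc.parallelize(products).map(train_one).collect()
```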
|
closed
|
2016-02-10T12:35:06Z
|
2016-03-17T19:02:28Z
|
https://github.com/databricks/spark-sklearn/issues/12
|
[] |
danielnee
| 3
|
pydantic/logfire
|
fastapi
| 886
|
Run linting (especially type checking) and pyodide tests in daily CI
|
To avoid cases like https://github.com/pydantic/logfire/issues/884
|
open
|
2025-02-22T12:15:40Z
|
2025-02-22T12:15:40Z
|
https://github.com/pydantic/logfire/issues/886
|
[] |
alexmojaki
| 0
|
strawberry-graphql/strawberry
|
graphql
| 3,805
|
Playground fails to load due to `packaging` version incompatibility
|
## Describe the Bug
The playground gets stuck showing "loading pyodide..."
`pyodide.asm.js` dumps the following stack trace:
```
Uncaught (in promise) PythonError: Traceback (most recent call last):
File "/lib/python311.zip/_pyodide/_base.py", line 573, in eval_code_async
await CodeRunner(
File "/lib/python311.zip/_pyodide/_base.py", line 395, in run_async
await coroutine
File "<exec>", line 9, in <module>
File "/lib/python3.11/site-packages/micropip/_commands/install.py", line 142, in install
await transaction.gather_requirements(requirements)
File "/lib/python3.11/site-packages/micropip/transaction.py", line 204, in gather_requirements
await asyncio.gather(*requirement_promises)
File "/lib/python3.11/site-packages/micropip/transaction.py", line 211, in add_requirement
return await self.add_requirement_inner(Requirement(req))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/micropip/transaction.py", line 300, in add_requirement_inner
await self._add_requirement_from_package_index(req)
File "/lib/python3.11/site-packages/micropip/transaction.py", line 347, in _add_requirement_from_package_index
await self.add_wheel(wheel, req.extras, specifier=str(req.specifier))
File "/lib/python3.11/site-packages/micropip/transaction.py", line 385, in add_wheel
await self.gather_requirements(wheel.requires(extras))
File "/lib/python3.11/site-packages/micropip/transaction.py", line 204, in gather_requirements
await asyncio.gather(*requirement_promises)
File "/lib/python3.11/site-packages/micropip/transaction.py", line 208, in add_requirement
return await self.add_requirement_inner(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/micropip/transaction.py", line 290, in add_requirement_inner
satisfied, ver = self.check_version_satisfied(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/micropip/transaction.py", line 235, in check_version_satisfied
raise ValueError(
ValueError: Requested 'packaging>=24', but packaging==23.1 is already installed
```
This probably relates to #3803 which added the `packaging>=24` requirement
## System Information
- Operating system: MacOS + Chrome/Firefox
- Strawberry version (if applicable): 0.262.2 (via https://play.strawberry.rocks/)
## Additional Context
I tried changing the strawberry version in the corner to an earlier version, but that didn't fix it.
|
closed
|
2025-03-12T19:29:20Z
|
2025-03-13T09:23:11Z
|
https://github.com/strawberry-graphql/strawberry/issues/3805
|
[
"bug"
] |
jacobmoshipco
| 1
|
horovod/horovod
|
tensorflow
| 3,224
|
Segmentation fault in horovod_shutdown when running a job along several nodes
|
**Environment:**
1. Framework: tensorflow-**cpu**
2. Framework version: 2.6.0
3. Horovod version: v0.23.0
4. MPI version: 4.1.2 (locally built)
5. CUDA version: None - I want to use CPU training to get information about the MPI communications.
6. NCCL version: None - Same as above.
7. Python version: 3.9.6 (Installed via pyenv, **I don't have root access** on this cluster I try to run Horovod on)
8. Spark / PySpark version: None
9. Ray version: None
10. OS and version: Ubuntu 16.04.3 LTS
11. GCC version: 5.4.0
12. CMake version: 3.18.4
Hello, I'm trying to execute the benchmark `tf_cnn_benchmarks.py` as follows:
```
mpirun --display-map \
$TRANSP \
-bind-to none -map-by node \
-x LD_LIBRARY_PATH -x PATH -x HOROVOD_MPI_THREADS_DISABLE=1 \
python $benchmark --variable_update horovod --batch_size 64 --horovod_device cpu --device=CPU
```
I don't specify `-N` or `-np` parameters to `mpirun` because they are passed by Slurm (I configured OpenMPI with the `--with-slurm` option, although the issue persisted both before and after that). Other toy MPI programs work out of the box this way. I need to use an `mpirun`-type command because I use a wrapper around it which uses PMPI and follows the same syntax, so using `horovodrun` is out of the question.
The program executes correctly when running on a single cluster node; however, when I run it over several nodes I get a segmentation fault before finishing:
```
Thread 0x00007fd1c07f9700 (most recent call first):
File "/home/msanchez/.pyenv/versions/3.9.6/lib/python3.9/site-packages/horovod/common/basics.py", line 149 in shutdown
Fatal Python error: Segmentation fault
```
I only get the trace this deep in calls, so I guess the library it references is somehow not being appropriately dynamically linked?
I have the setup automated because some other colleagues may need to run Horovod too, and to ensure we all have the same environment:
```
$ pip3 install --no-cache-dir --upgrade tensorflow-cpu
export HOROVOD_WITH_MPI=1
export HOROVOD_WITHOUT_GLOO=1
export HOROVOD_WITH_TENSORFLOW=1
export HOROVOD_WITHOUT_PYTORCH=1
export HOROVOD_WITHOUT_MXNET=1
export HOROVOD_CPU_OPERATIONS=MPI
```
I've tried both installing via pip3 and building from source:
```
$ git clone --recursive https://github.com/horovod/horovod.git /tmp/horovod_$USER
$ cd /tmp/horovod_$USER
$ python3 setup.py clean build sdist
$ pip3 install dist/horovod*tar.gz
```
```
pip3 install --no-cache-dir horovod[tensorflow]
```
When I clone the repo I do have the C++ code to build the library:
```
$ git clone --recursive https://github.com/horovod/horovod.git /tmp/horovod_$USER
$ cd /tmp/horovod_$USER
$ grep -iRI "horovod_shutdown"
horovod/common/basics.py: self.MPI_LIB_CTYPES.horovod_shutdown()
horovod/common/operations.cc:void horovod_shutdown() {
horovod/common/operations.h:void horovod_shutdown();
```
My `PATH` variable does contain Python, Python3, pip, pip3 (the ones installed via pyenv), ompi on `~/.local/bin`
My `LD_LIBRARY_PATH` includes `~/.local/lib` for ompi. Maybe there's a directory I'm missing? I'm checking standard and error output from compilation and I don't see any `.so` files being generated.
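If it helps with debugging, a quick way to locate Horovod's compiled extensions so they can be inspected with `ldd` (a sketch; it only lists the shared objects shipped with the install):
```python
import glob
import os

import horovod

root = os.path.dirname(horovod.__file__)
for so in glob.glob(os.path.join(root, "**", "*.so"), recursive=True):
    print(so)  # run `ldd <path>` on these to confirm they link your local OpenMPI
```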
When I check the Horovod build everything seems okay.
```
$ horovodrun --check-build
Horovod v0.23.0:
Available Frameworks:
[X] TensorFlow
[X] PyTorch
[ ] MXNet
Available Controllers:
[X] MPI
[ ] Gloo
Available Tensor Operations:
[ ] NCCL
[ ] DDL
[ ] CCL
[X] MPI
[ ] Gloo
```
This is the location horovodrun is installed at:
```
$ which horovodrun
/home/msanchez/.pyenv/shims/horovodrun
```
Any help would be appreciated.
|
closed
|
2021-10-15T07:33:28Z
|
2022-11-10T11:38:35Z
|
https://github.com/horovod/horovod/issues/3224
|
[
"bug"
] |
msdlr
| 2
|
svc-develop-team/so-vits-svc
|
pytorch
| 140
|
[Help]: Error when using text-to-speech
|
### Please tick the confirmation boxes below.
- [X] I have carefully read [README.md](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/README_zh_CN.md) and the [Quick solution in the wiki](https://github.com/svc-develop-team/so-vits-svc/wiki/Quick-solution).
- [X] I have investigated the problem through various search engines; the issue I am raising is not a common one.
- [X] I am not using a one-click package/environment package provided by a third-party user.
### System platform version
Windows 10 Pro 21H2
### GPU model
NVIDIA GeForce RTX 3070 Laptop
### Python version
3.8.16
### PyTorch version
Version: 1.13.1+cu116
### sovits branch
4.0 (default)
### Dataset source (used to judge dataset quality)
Character voice lines unpacked from a mobile game
### Step where the problem occurs / command executed
Text conversion in the WebUI
### Problem description
An Error appears directly below.
I followed a video tutorial.
f0 is selected.
Audio-to-voice conversion succeeds, but text conversion fails.
Could it be because the model is still training and not yet finished?

### Logs
```python
>> python inference_main.py -m "logs/44k/G_96000.pth" -c "configs/config.json" -n "1.wav" -t 0 -s "24"
load model(s) from hubert/checkpoint_best_legacy_500.pt
INFO:fairseq.tasks.hubert_pretraining:current directory is D:\VITS\so-vits-svc-4.0
INFO:fairseq.tasks.hubert_pretraining:HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': 'metadata', 'fine_tuning': False, 'labels': ['km'], 'label_dir': 'label', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': False, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
INFO:fairseq.models.hubert.hubert:HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': default, 'encoder_layers': 12, 'encoder_embed_dim': 768, 'encoder_ffn_embed_dim': 3072, 'encoder_attention_heads': 12, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.1, 'attention_dropout': 0.1, 'activation_dropout': 0.0, 'encoder_layerdrop': 0.05, 'dropout_input': 0.1, 'dropout_features': 0.1, 'final_dim': 256, 'untie_final_proj': True, 'layer_norm_first': False, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.1, 'mask_length': 10, 'mask_prob': 0.8, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 10, 'mask_channel_prob': 0.0, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': False, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
load
INFO:root:Loaded checkpoint 'logs/44k/G_96000.pth' (iteration 179)
#=====segment start, 3.88s======
vits use time:0.7570040225982666
#=====segment start, 0.067s======
jump empty segment
(sovits) PS D:\VITS\so-vits-svc-4.0> python --version
Python 3.8.16
(sovits) PS D:\VITS\so-vits-svc-4.0> edge-tts --text "Hello, world!" --write-media hello.mp3
WEBVTT
00:00:00.100 --> 00:00:00.537
Hello
00:00:00.787 --> 00:00:01.275
world
Tested: command-line audio-to-audio conversion works fine, and edge-tts works fine
```
### Screenshot the `so-vits-svc` and `logs/44k` folders and paste here

### Additional notes
_No response_
|
closed
|
2023-04-11T08:45:12Z
|
2023-04-12T16:24:53Z
|
https://github.com/svc-develop-team/so-vits-svc/issues/140
|
[
"help wanted"
] |
EsawaAzusa
| 3
|
CTFd/CTFd
|
flask
| 2,496
|
Scoreboard shows "No solves yet", even though there are solves
|
<!--
If this is a bug report please fill out the template below.
If this is a feature request please describe the behavior that you'd like to see.
-->
**Environment**:
- CTFd Version/Commit: 3.7.0, commit: `6ce3eb10745283c57f2ef1aa1119ae46961a3bc3`
- Operating System: Arch Linux
- Web Browser and Version: Brave Browser 122.1.63.174
**What happened?**
Scoreboard shows "No solves yet", even though there are solves.

**What did you expect to happen?**
Should show scoreboard
**How to reproduce your issue**
visit /scoreboard
**Any associated stack traces or error logs**
Found this error in logs
```
2024-03-15T20:00:05.645078663Z [2024-03-15 20:00:05 +0000] [20] [ERROR] Error handling request
2024-03-15T20:00:05.645120601Z Traceback (most recent call last):
2024-03-15T20:00:05.645128737Z File "/opt/venv/lib/python3.11/site-packages/gunicorn/workers/base_async.py", line 113, in handle_request
2024-03-15T20:00:05.645138815Z resp.write_file(respiter)
2024-03-15T20:00:05.645146219Z File "/opt/venv/lib/python3.11/site-packages/gunicorn/http/wsgi.py", line 385, in write_file
2024-03-15T20:00:05.645154274Z if not self.sendfile(respiter):
2024-03-15T20:00:05.645157951Z ^^^^^^^^^^^^^^^^^^^^^^^
2024-03-15T20:00:05.645161298Z File "/opt/venv/lib/python3.11/site-packages/gunicorn/http/wsgi.py", line 375, in sendfile
2024-03-15T20:00:05.645164844Z self.sock.sendfile(respiter.filelike, count=nbytes)
2024-03-15T20:00:05.645168231Z File "/opt/venv/lib/python3.11/site-packages/gevent/_socket3.py", line 486, in sendfile
2024-03-15T20:00:05.645171497Z return self._sendfile_use_send(file, offset, count)
2024-03-15T20:00:05.645175344Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-15T20:00:05.645179743Z File "/opt/venv/lib/python3.11/site-packages/gevent/_socket3.py", line 416, in _sendfile_use_send
2024-03-15T20:00:05.645191805Z self._check_sendfile_params(file, offset, count)
2024-03-15T20:00:05.645197536Z File "/opt/venv/lib/python3.11/site-packages/gevent/_socket3.py", line 461, in _check_sendfile_params
2024-03-15T20:00:05.645202324Z raise ValueError(
2024-03-15T20:00:05.645207635Z ValueError: count must be a positive integer (got 0)
```
API request to `/api/v1/scoreboard/top/10` returns
```json
{"success": true, "data": {}}
```
|
closed
|
2024-03-15T20:15:47Z
|
2024-07-03T20:34:45Z
|
https://github.com/CTFd/CTFd/issues/2496
|
[] |
AndersFelde
| 1
|
thunlp/OpenPrompt
|
nlp
| 192
|
Can BART's <mask> token be treated the way it's done for BERT and RoBERTa in Prototypical Verbalizer implementation?
|
open
|
2022-09-07T04:50:30Z
|
2022-09-07T04:50:30Z
|
https://github.com/thunlp/OpenPrompt/issues/192
|
[] |
nirmal2k
| 0
|
|
scrapy/scrapy
|
web-scraping
| 6,365
|
Fix overridable methods in MediaPipeline
|
`MediaPipeline` defines several empty or almost empty "overridable" methods, which return things inconsistent with their overrides. I propose making all of them raise `NotImplementedError`. Alternatively `MediaPipeline` should just be made an abstract class and all those methods made abstract methods, but I have no idea if that will break anything (e.g. do all children always override all of those methods?).
Another problem is existing tests, that test specifically that e.g. `MediaPipeline.media_downloaded()` returns a response, which makes no sense to me (normally `media_downloaded()` returns a file info dict), so all those need to be changed or removed.
And another problem, indirectly related to this, is that this interface is very poorly documented, most of these functions are not mentioned in the docs at all, so it's not always clear what should they take and return (and the code uses many of them as callbacks in long callback chains so it's not clear even from the code).
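For illustration, the two options side by side (method name taken from this issue; the exact signature is an assumption):
```python
from abc import ABC, abstractmethod

# Option 1: keep the concrete base class but fail loudly when not overridden.
class MediaPipeline:
    def media_downloaded(self, response, request, info, *, item=None):
        raise NotImplementedError

# Option 2: an abstract base class; children that skip a method fail at instantiation.
class AbstractMediaPipeline(ABC):
    @abstractmethod
    def media_downloaded(self, response, request, info, *, item=None):
        ...
```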
|
closed
|
2024-05-15T15:03:25Z
|
2024-05-28T08:42:59Z
|
https://github.com/scrapy/scrapy/issues/6365
|
[
"bug"
] |
wRAR
| 1
|
hbldh/bleak
|
asyncio
| 876
|
Does bleak support LE CODED PHY?
|
closed
|
2022-07-11T11:23:52Z
|
2022-07-11T14:48:50Z
|
https://github.com/hbldh/bleak/issues/876
|
[] |
rty813
| 0
|
|
jupyter-incubator/sparkmagic
|
jupyter
| 645
|
%%spark config doesn't recognize official key name like "spark.driver.memory"
|
Hi,
I have checked help,
```
%spark ?
config
Override the livy session properties sent to Livy on session creation. All session creations will
contain these config settings from then on.
Expected value is a JSON key-value string to be sent as part of the Request Body for the POST /sessions
endpoint in Livy.
e.g. `%%spark config`
`{"driverMemory":"1000M", "executorCores":4}`
```
It only supports config like `{"driverMemory":"1000M", "executorCores":4}`.
But referring to the Spark documentation
https://spark.apache.org/docs/latest/configuration.html

I think the official config keys should be supported too.
Here is my Spark config.
"spark.driver.memory" -> "driverMemory" is clear.
"spark.driver.maxResultSize" -> "driverMaxResultSize" failed.
How do I convert `spark.sql.session.timeZone`?
```
%%spark config
{
    "spark.driver.memory": "5g",
    "spark.driver.maxResultSize": "5g",
    "spark.executor.memory": "4g",
    "spark.executor.memoryOverhead": "2g",
    "spark.kryoserializer.buffer.max": 1024,
    "spark.executor.instances": 6,
    "spark.executor.cores": 3,
    "spark.sql.session.timeZone": "Asia/Shanghai"
}
```
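For what it's worth, Livy's `POST /sessions` body also accepts a `conf` map for arbitrary Spark keys, so a mix like the following should work (a sketch; not tested against sparkmagic):
```
%%spark config
{
    "driverMemory": "5g",
    "executorMemory": "4g",
    "conf": {
        "spark.driver.maxResultSize": "5g",
        "spark.executor.memoryOverhead": "2g",
        "spark.sql.session.timeZone": "Asia/Shanghai"
    }
}
```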
|
closed
|
2020-05-11T07:42:06Z
|
2020-10-11T17:26:20Z
|
https://github.com/jupyter-incubator/sparkmagic/issues/645
|
[] |
eromoe
| 2
|
gee-community/geemap
|
jupyter
| 1,487
|
Reorganize dependencies
|
Some proposed changes to the geemap dependencies:
- `requirements.txt`: this will include the core dependencies, such as earthengine-api, ipyleaflet, folium, ipyevents, etc.
- `requirements_extra.txt`: this will include extra dependencies, such as ee_extra, gdown, geeadd, ipytree, whiteboxgui.
- `requirements_all.txt`: this includes all optional dependencies, such as geopandas, sankee, ffmpeg, etc.
As a result, installing geemap using pip will be as follows:
- `pip install geemap`
- `pip install geemap[extra]`
- `pip install geemap[all]`
The goal is to reduce the number of core required dependencies, paving the way for potential adoption by Google Colab.
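A minimal sketch of how the extras could be wired up in `setup.py` (the helper below is hypothetical; geemap's actual packaging may differ):
```python
def parse_requirements(path):
    """Read one requirement per line, skipping blanks and comments."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip() and not line.startswith("#")]

install_requires = parse_requirements("requirements.txt")
extras_require = {
    "extra": parse_requirements("requirements_extra.txt"),
    "all": parse_requirements("requirements_all.txt"),
}
# then passed to setuptools.setup(install_requires=..., extras_require=...)
```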
|
closed
|
2023-04-06T02:14:08Z
|
2023-04-06T03:53:35Z
|
https://github.com/gee-community/geemap/issues/1487
|
[
"Feature Request"
] |
giswqs
| 2
|
explosion/spaCy
|
nlp
| 13,734
|
en_core_web_trf (3.8.0) ORG predictions seem inaccurate compared to en_core_web_trf (3.6.1)
|
<!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
en_core_web_trf (3.8.0) labels CARDINAL tokens as ORG. This happens in the affiliation sections of many scientific manuscripts I tried. Interestingly, only the transformer pipeline has this new unexpected behavior; NER from en_core_web_lg (3.8.0) works as expected.
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
```python
from pprint import pprint
import spacy
text = "Kelly E. Williams 1,2,3* , Kathryn P. Huyvaert 2 , Kurt C. Vercauteren 1 , Amy J. Davis 1 , Antoinette J. Piaggio 1\n1 USDA, Wildlife Services, National Wildlife Research Center, Wildlife Genetics Lab, 4101 Laporte Avenue, Fort Collins, CO, USA\n2 Department of Fish, Wildlife, and Conservation Biology, Colorado State University, Fort Collins, CO, 80523, USA\n3 School of Environmental and Forest Sciences, University of Washington, Seattle, WA, USA"
nlp = spacy.load("en_core_web_trf")
doc = nlp(text)
pprint([ent for ent in doc.ents if ent.label_ == "ORG"])
```
spaCy version 3.6.1 ( en_core_web_trf (3.6.1) ) returns:
```
[USDA,
Wildlife Services,
National Wildlife Research Center,
Wildlife Genetics Lab,
Department of Fish, Wildlife, and Conservation Biology,
Colorado State University,
School of Environmental and Forest Sciences,
University of Washington]
```
spaCy version 3.8.4 (en_core_web_trf (3.8.0)) returns:
```
[USDA,
Wildlife Services,
National Wildlife Research Center,
Wildlife Genetics Lab,
USA
,
2 Department of Fish, Wildlife,, <=== ORG instead of CARDINAL for "2"
Colorado State University,
USA
,
3 School of Environmental and Forest Sciences, <=== ORG instead of CARDINAL for "3"
University of Washington]
```
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
## Info about spaCy
- **spaCy version:** 3.8.4
- **Platform:** Linux-6.8.0-51-generic-x86_64-with-glibc2.39
- **Platform:** macOS-15.2-arm64-arm-64bit
- **Python version:** 3.11.11
- **Pipelines:** en_core_web_trf (3.8.0)
|
open
|
2025-01-25T04:56:21Z
|
2025-01-26T04:29:23Z
|
https://github.com/explosion/spaCy/issues/13734
|
[] |
vitaly-d
| 1
|
JaidedAI/EasyOCR
|
machine-learning
| 344
|
Stuck at Downloading Recognition Model
|
Hi, thanks for your hard work.
I would like to ask for a solution to my problem.
I am stuck at '**100.0% Complete Downloading recognition model ...**' inside **_docker_**.
I have been waiting for hours and it still persists.
Could you please help me understand why this problem occurs?
_P.S. I am using Ubuntu 20.04 and Python 3.8_

|
closed
|
2021-01-06T02:40:53Z
|
2023-08-26T10:40:04Z
|
https://github.com/JaidedAI/EasyOCR/issues/344
|
[] |
takeruadelbert
| 3
|
desec-io/desec-stack
|
rest-api
| 507
|
api: drop cron privileges
|
Should run as nobody
|
open
|
2021-01-08T23:41:25Z
|
2021-01-08T23:41:25Z
|
https://github.com/desec-io/desec-stack/issues/507
|
[
"bug",
"api"
] |
peterthomassen
| 0
|
satwikkansal/wtfpython
|
python
| 213
|
Dividing in Python
|
In Python 2
```
print(35/6)
```
OUTPUT:
```
5
```
In Python 3
```
print(35/6)
```
OUTPUT:
```
5.833333333333333
```
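For completeness, `//` floors in both versions, and Python 2 can opt into true division:
```python
from __future__ import division  # a no-op in Python 3; changes / in Python 2

print(35 // 6)  # 5 in both versions (floor division)
print(35 / 6)   # 5.833333333333333 in both versions, thanks to the import above
```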
|
closed
|
2020-07-25T03:44:04Z
|
2020-08-08T19:50:43Z
|
https://github.com/satwikkansal/wtfpython/issues/213
|
[] |
Yukti-09
| 2
|
ned2/slapdash
|
plotly
| 30
|
Use Waitress with Slapdash?
|
Slapdash is great for developing my Dash app.
Unfortunately, I am too much of a newbie to get slapdash working with waitress on Windows 10 and macOS.
I tried to point to the wsgi.py file.
I suppose I have to import waitress there, but I don’t know how.
Can you help me with that?
Best regards, Rob
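In case it helps, a minimal sketch (the import path is an assumption about where slapdash creates the app; a Dash app exposes its underlying Flask instance as `.server`):
```python
# hypothetical wsgi.py
from waitress import serve

from slapdash.app import app  # assumption: the module where the Dash app lives

if __name__ == "__main__":
    # Waitress serves the WSGI (Flask) app wrapped by Dash, not the Dash object itself.
    serve(app.server, host="0.0.0.0", port=8050)
```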
|
closed
|
2020-02-25T09:33:03Z
|
2022-10-19T12:40:19Z
|
https://github.com/ned2/slapdash/issues/30
|
[] |
Quaternion66
| 1
|
amdegroot/ssd.pytorch
|
computer-vision
| 274
|
Why is the NMS method so slow?
|
I tried to modify this code to implement video object detection, and I find the test phase is slow.
Then I profiled the time cost and found that NMS is the key reason.
> nms time: 0.1834 sec.
> nms time: 0.1546 sec.
> nms time: 0.1525 sec.
> nms time: 0.1929 sec.
> nms time: 0.1670 sec.
> nms time: 0.1230 sec.
> nms time: 0.1894 sec.
> nms time: 0.1371 sec.
> nms time: 0.1648 sec.
> nms time: 0.1536 sec.
> nms time: 0.1382 sec.
> nms time: 0.1237 sec.
> nms time: 0.1603 sec.
> nms time: 0.1506 sec.
> nms time: 0.1619 sec.
> nms time: 0.1577 sec.
> nms time: 0.1479 sec.
> nms time: 0.1609 sec.
> nms time: 0.1520 sec.
> nms time: 0.1393 sec.
> nms time: 0.1535 sec.
> nms time: 0.1651 sec.
> nms time: 0.1500 sec.
> nms time: 0.1566 sec.
> nms time: 0.1418 sec.
> nms time: 0.1592 sec.
> nms time: 0.1417 sec.
> nms time: 0.1559 sec.
> nms time: 0.1424 sec.
> nms time: 0.1650 sec.
> cls time: 4.7556 sec.
My task has 30 classes, so it is very slow.
How can I fix it?
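One common fix is to swap the repository's pure-Python NMS for torchvision's compiled implementation (a sketch with placeholder boxes; requires torchvision >= 0.3):
```python
import torch
from torchvision.ops import nms

boxes = torch.rand(1000, 4).cumsum(dim=1)  # placeholder (x1, y1, x2, y2) boxes
scores = torch.rand(1000)                  # placeholder confidence scores

keep = nms(boxes, scores, iou_threshold=0.45)  # indices of retained boxes
filtered = boxes[keep]
```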
|
open
|
2018-12-13T03:26:38Z
|
2020-11-08T13:54:53Z
|
https://github.com/amdegroot/ssd.pytorch/issues/274
|
[] |
Feywell
| 3
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 377
|
How to load the quantized model with transformers
|
After running convert_and_quantize_chinese_alpaca_plus I obtained the file ggml-model-q8_0.bin.
I load it with transformers

and it raises an error

The directory contents at the path are as follows

pytorch_model.bin is ggml-model-q8_0.bin, renamed by me.
How should I modify things so that it runs?
Running via the inference_hf script behaves the same way, but testing inside the convert script produces normal output.

|
closed
|
2023-05-18T11:16:37Z
|
2023-05-25T23:55:44Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/377
|
[
"stale"
] |
hrz394943230
| 2
|
Kitware/trame
|
data-visualization
| 646
|
The member function remove_node() of class PipelineManager in the advanced_git_tree example raises a RecursionError
|
In the example advanced_git_tree, the function remove_node() does not work correctly.
The function
```python
def remove_node(self, _id):
    for id in self._children_map[_id]:
        self.remove_node(_id)  # bug: recurses on the parent id, so it never terminates
    self._nodes.pop(_id)
    self._update_hierarchy()
```
should be
```python
def remove_node(self, _id):
    for id in self._children_map[_id]:
        self.remove_node(id)  # recurse into each child instead
    self._nodes.pop(_id)
    self._update_hierarchy()
```
|
closed
|
2024-12-13T05:05:30Z
|
2024-12-14T16:40:57Z
|
https://github.com/Kitware/trame/issues/646
|
[] |
linson7017
| 3
|
aiortc/aiortc
|
asyncio
| 510
|
Transferring image to a client
|
I set up a TCP signaling mechanism and sent an image frame as an array to a client. The frame is received as an aiortc.rtcrtpreceiver.RemoteStreamTrack object on the client side. Since RemoteStreamTrack is an instance of MediaStreamTrack, I figured I could call its recv() method; the examples on the aiortc GitHub, specifically server.py in the server folder, do the same. recv() yields a Frame object, which I can convert to an ndarray with frame.to_ndarray() and then process with numpy and opencv.
When my code runs and the “track” event handler triggers, I start the above recv()-and-convert process by calling a class I wrote as an extension of RemoteStreamTrack. But the code gets stuck, specifically at the `await self.frame.recv()` call in my class. Is this the correct approach? If yes, how can I proceed; if not, what should be done? Thank you.
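For reference, a minimal consumer sketch (assumes `pc` is an `RTCPeerConnection`; the usual pattern is to spawn a task from the `track` handler rather than blocking inside it):
```python
import asyncio

@pc.on("track")
def on_track(track):
    async def consume():
        while True:
            frame = await track.recv()               # av.VideoFrame
            img = frame.to_ndarray(format="bgr24")   # numpy array for OpenCV
            # ... process img here ...

    asyncio.ensure_future(consume())
```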
|
closed
|
2021-03-16T00:16:53Z
|
2021-03-18T23:08:18Z
|
https://github.com/aiortc/aiortc/issues/510
|
[] |
iamkrs9
| 6
|
drivendataorg/cookiecutter-data-science
|
data-science
| 412
|
Cookiecutter installation problem
|
I am trying to set up the Cookiecutter structure in VS Code in order to arrange my project into a well-defined structure, but I keep getting an error.
I managed to install the Cookiecutter package itself (I used `pip install cookiecutter-data-science`). But right after that, when I try to use commands like `ccds`/`cookiecutter` to create the project itself, I get the following error:
ccds : The term 'ccds' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was
included, verify that the path is correct and try again.
At line:1 char:1
+ ccds
+ ~~~~
+ CategoryInfo : ObjectNotFound: (ccds:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
How can I solve it?
|
closed
|
2025-01-01T12:42:31Z
|
2025-01-23T09:09:11Z
|
https://github.com/drivendataorg/cookiecutter-data-science/issues/412
|
[] |
Idelsohn
| 3
|
microsoft/hummingbird
|
scikit-learn
| 37
|
Cannot use torch==1.5.0 due to breaking change
|
When using torch==1.5.0 (instead of the current torch==1.4.0), hummingbird sometimes gets stuck in an infinite loop in `forward`:
```bash
File "/opt/conda/envs/rapids/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/hummingbird/hummingbird/operator_converters/_tree_implementations.py", line 349, in forward
gather_indices = torch.index_select(nodes, 0, prev_indices).view(-1, self.num_trees)
```
This is potentially due to **[[BC-BREAKING] change index_select scalar_check to retain dimensionality of input #30790](https://github.com/pytorch/pytorch/pull/30790)**, a change to the `index_select` function in torch==1.5.0. The change makes it return a 0-dimensional tensor iff the input is 0-dimensional.
However, after digging around a bit more in our code, I separated out
```python
gather_indices = torch.index_select(nodes, 0, prev_indices).view(-1, self.num_trees)
```
into:
```python
gather_indices = torch.index_select(nodes, 0, prev_indices)  # now gets stuck here
if gather_indices.shape == torch.Size([]):
    gather_indices = gather_indices.view(-1)
gather_indices = gather_indices.view(-1, self.num_trees)
```
and found that the code is getting stuck on the `index_select` itself rather than any problem with the changed return type.
So maybe there is some issue related to **[optimize index_select performance on CPU with TensorIterator #30598](https://github.com/pytorch/pytorch/pull/30598)**, which is also new in torch==1.5.
|
closed
|
2020-04-29T04:12:18Z
|
2020-08-11T23:28:23Z
|
https://github.com/microsoft/hummingbird/issues/37
|
[] |
ksaur
| 1
|
allenai/allennlp
|
pytorch
| 5,055
|
from_pretrained_transformer not called in commnad line predict mode with BasicClassifier
|
<!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [X] I have verified that the issue exists against the `master` branch of AllenNLP.
- [X] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/master/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [X] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [X] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [X] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/master/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/master) to find out if the bug was already fixed in the master branch.
- [X] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [X] I have included in the "Related issues or possible duplicates" section below all related issues and possible duplicate issues (If there are none, check this box anyway).
- [X] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [X] I have included in the "Environment" section below the output of `pip freeze`.
- [X] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
<!-- Please provide a clear and concise description of what the bug is here. -->
Hi, I'm running a very basic text classification experiment with pretrained transformers as embeddings. I implemented only a very basic data reader and wrote the config file (see 'steps to reproduce'). I run everything from the command line: `allennlp train`, `evaluate` and `predict`. `train` and `evaluate` work like a charm, while I get a `KeyError` exception when running the `predict` command. The error seems to be caused by the absence of a `tokens.txt` entry in the `vocabulary` folder of the trained model, which means the namespace 'tokens' is not present in the `vocab` when `make_output_human_readable` of the `BasicClassifier` is called. The only namespaces available there are `labels` and `tags`.
The command I run is
```
allennlp predict experiments/bert_train__max_512__batch_16/model.tar.gz data/test.jsonl --include-package my_allennlp_pkg --cuda-device 3 --batch-size 8 --output-file test --use-dataset-reader
```
Traceback
<details>
<summary><b>Python traceback:</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
```
Traceback (most recent call last):
File "/path/to/my/env/env/allen210/bin/allennlp", line 8, in <module>
sys.exit(run())
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/__main__.py", line 34, in run
main(prog="allennlp")
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/commands/__init__.py", line 119, in main
args.func(args)
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/commands/predict.py", line 239, in _predict
manager.run()
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/commands/predict.py", line 206, in run
for model_input_instance, result in zip(batch, self._predict_instances(batch)):
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/commands/predict.py", line 167, in _predict_instances
results = self._predictor.predict_batch_instance(batch_data)
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/predictors/predictor.py", line 296, in predict_batch_instance
outputs = self._model.forward_on_instances(instances)
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/models/model.py", line 185, in forward_on_instances
outputs = self.make_output_human_readable(self(**model_input))
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/models/basic_classifier.py", line 166, in make_output_human_readable
[
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/models/basic_classifier.py", line 167, in <listcomp>
self.vocab.get_token_from_index(token_id.item(), namespace=self._namespace)
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/data/vocabulary.py", line 737, in get_token_from_index
return self._index_to_token[namespace][index]
KeyError: 101
```
</p>
</details>
In general, it seems like when running `allennlp predict` without specifying a predictor it does not read the pretrained model vocabulary inside the vocab object (thus not generating the namespace), or something similar.
### Attempted solutions
1. I tried inserting a vocabulary entry in the config as suggested in [4690](https://github.com/allenai/allennlp/issues/4690), re-creating the `model.tar.gz` with the so updated config file, but nothing changed.
```
"vocabulary": {
"type": "from_pretrained_transformer",
"model_name": "bart-base-cased"
}
```
2. I downloaded the `bert-base-cased` vocab from huggingface [here](https://huggingface.co/bert-base-cased/raw/main/vocab.txt) inside the `vocabulary` folder, named it `tokens.txt`, and re-created the `model.tar.gz` with this new `vocabulary` dir. The error changed to
<details>
<summary><b>Python traceback:</b></summary>
<p>
```
Traceback (most recent call last):
File "/path/to/my/env//allen210/bin/allennlp", line 8, in <module>
sys.exit(run())
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/__main__.py", line 34, in run
main(prog="allennlp")
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/commands/__init__.py", line 119, in main
args.func(args)
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/commands/predict.py", line 224, in _predict
predictor = _get_predictor(args)
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/commands/predict.py", line 115, in _get_predictor
archive = load_archive(
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/models/archival.py", line 208, in load_archive
model = _load_model(config.duplicate(), weights_path, serialization_dir, cuda_device)
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/models/archival.py", line 242, in _load_model
return Model.load(
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/models/model.py", line 406, in load
return model_class._load(config, serialization_dir, weights_file, cuda_device)
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/models/model.py", line 293, in _load
vocab = vocab_class.from_files(
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/data/vocabulary.py", line 383, in from_files
vocab.set_from_file(filename, is_padded, namespace=namespace, oov_token=oov_token)
File "/path/to/my/env/env/allen210/lib/python3.8/site-packages/allennlp/data/vocabulary.py", line 511, in set_from_file
assert self._oov_token in self._token_to_index[namespace], "OOV token not found!"
AssertionError: OOV token not found!
```
</p>
</details>
which I suppose is due to the absence of the `@@UNKNOWN@@` token in the vocab. Inserting that token at the beginning or end of the said `tokens.txt` vocabulary makes the predict command work, but I noticed a clear misalignment in the index-to-token conversion.
## Related issues or possible duplicates
This may be related to the general issues (partly solved) about loading a vocab from a pretrained (transformer) model, like in [4973](https://github.com/allenai/allennlp/issues/4937), [4958](https://github.com/allenai/allennlp/pull/4958), [4690](https://github.com/allenai/allennlp/issues/4690), [3456](https://github.com/allenai/allennlp/issues/3456)
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS: Linux 3.10.0-1127.19.1.el7.x86_64
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version: 3.8.8
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
absl-py @ file:///home/conda/feedstock_root/build_artifacts/absl-py_1615404881292/work
aiohttp @ file:///home/conda/feedstock_root/build_artifacts/aiohttp_1605734406386/work
allennlp==2.1.0
argon2-cffi==20.1.0
async-generator==1.10
async-timeout==3.0.1
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1605083924122/work
backcall==0.2.0
bleach==3.3.0
blinker==1.4
blis==0.7.4
boto3==1.17.25
botocore==1.20.25
brotlipy==0.7.0
cachetools @ file:///home/conda/feedstock_root/build_artifacts/cachetools_1611555765219/work
catalogue==2.0.1
certifi==2020.12.5
cffi @ file:///tmp/build/80754af9/cffi_1613246945912/work
chardet==4.0.0
click==7.1.2
cryptography @ file:///home/conda/feedstock_root/build_artifacts/cryptography_1615405999564/work
cymem==2.0.5
decorator==4.4.2
defusedxml==0.7.1
entrypoints==0.3
filelock==3.0.12
google-auth @ file:///home/conda/feedstock_root/build_artifacts/google-auth_1608136875028/work
google-auth-oauthlib==0.4.1
grpcio @ file:///home/conda/feedstock_root/build_artifacts/grpcio_1604365522020/work
h5py==3.2.1
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1593328102638/work
importlib-metadata @ file:///home/conda/feedstock_root/build_artifacts/importlib-metadata_1615169443604/work
iniconfig==1.1.1
ipykernel==5.5.0
ipython==7.21.0
ipython-genutils==0.2.0
ipywidgets==7.6.3
jedi==0.18.0
Jinja2==2.11.3
jmespath==0.10.0
joblib==1.0.1
jsonlines==2.0.0
jsonnet==0.17.0
jsonpickle==2.0.0
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client==6.1.11
jupyter-console==6.2.0
jupyter-core==4.7.1
jupyterlab-pygments==0.1.2
jupyterlab-widgets==1.0.0
lmdb==1.1.1
Markdown @ file:///home/conda/feedstock_root/build_artifacts/markdown_1614595805172/work
MarkupSafe==1.1.1
mistune==0.8.4
more-itertools==8.7.0
multidict @ file:///home/conda/feedstock_root/build_artifacts/multidict_1602413132207/work
murmurhash==1.0.5
nbclient==0.5.3
nbconvert==6.0.7
nbformat==5.1.2
nest-asyncio==1.5.1
nltk==3.5
notebook==6.2.0
numpy==1.20.1
oauthlib==3.0.1
overrides==3.1.0
packaging==20.9
pandas==1.2.3
pandocfilters==1.4.3
parso==0.8.1
pathy==0.4.0
pexpect==4.8.0
pickleshare==0.7.5
Pillow==8.1.2
pluggy==0.13.1
preshed==3.0.5
prometheus-client==0.9.0
prompt-toolkit==3.0.17
protobuf==3.15.5
ptyprocess==0.7.0
py==1.10.0
pyasn1==0.4.8
pyasn1-modules==0.2.7
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work
pydantic==1.7.3
Pygments==2.8.1
PyJWT @ file:///home/conda/feedstock_root/build_artifacts/pyjwt_1610910308735/work
pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1608055815057/work
pyparsing==2.4.7
pyrsistent==0.17.3
PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1610291447907/work
pytest==6.2.2
python-dateutil==2.8.1
pytz==2021.1
pyzmq==22.0.3
qtconsole==5.0.2
QtPy==1.9.0
regex==2020.11.13
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1608156231189/work
requests-oauthlib @ file:///home/conda/feedstock_root/build_artifacts/requests-oauthlib_1595492159598/work
rsa @ file:///home/conda/feedstock_root/build_artifacts/rsa_1614171254180/work
s3transfer==0.3.4
sacremoses==0.0.43
scikit-learn==0.24.1
scipy==1.6.1
Send2Trash==1.5.0
sentencepiece==0.1.95
six @ file:///home/conda/feedstock_root/build_artifacts/six_1590081179328/work
smart-open==3.0.0
spacy==3.0.5
spacy-legacy==3.0.1
srsly==2.4.0
tensorboard @ file:///home/conda/feedstock_root/build_artifacts/tensorboard_1610699261066/work/tensorboard-2.4.1-py3-none-any.whl
tensorboard-plugin-wit @ file:///home/conda/feedstock_root/build_artifacts/tensorboard-plugin-wit_1611075653546/work/tensorboard_plugin_wit-1.8.0-py3-none-any.whl
tensorboardX==2.1
terminado==0.9.2
testpath==0.4.4
thinc==8.0.2
threadpoolctl==2.1.0
tokenizers==0.10.1
toml==0.10.2
torch==1.7.1
torchvision==0.8.2
tornado==6.1
tqdm==4.59.0
traitlets==5.0.5
transformers==4.3.3
typer==0.3.2
typing-extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1602702424206/work
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1611695416663/work
wasabi==0.8.2
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==1.0.1
widgetsnbextension==3.5.1
yarl @ file:///home/conda/feedstock_root/build_artifacts/yarl_1605429457708/work
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1614945704755/work
```
</p>
</details>
## Steps to reproduce
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
I have a very basic data reader
```
@DatasetReader.register("case_description")
class CaseDescriptionReader(DatasetReader):
    def __init__(
        self,
        label: str,
        token_indexers: Dict[str, TokenIndexer] = None,
        tokenizer: Optional[Tokenizer] = None,
        max_instances: Optional[int] = None,
        serialization_dir: Optional[str] = None
    ) -> None:
        super().__init__(max_instances=max_instances, serialization_dir=serialization_dir)
        self._token_indexers = token_indexers or {"tokens": SingleIdTokenIndexer()}
        self._tokenizer = tokenizer
        self._label = label

    @overrides
    def _read(self, file_path):
        tot_skipped = 0
        if file_path.endswith(".jsonl"):
            with jsonlines.open(file_path) as reader:
                for i, doc in enumerate(reader):
                    instance = self.text_to_instance(text=doc["caseDescription"],
                                                     label=doc[self._label])
                    if instance is not None:
                        yield instance

    @overrides
    def text_to_instance(self, text: str, label: str) -> Optional[Instance]:
        if not self._tokenizer:
            tokens = text.split(" ")
            tokens = [Token(t) for t in tokens]
        else:
            tokens = self._tokenizer.tokenize(text)
        text_field = TextField(tokens)
        fields: Dict[str, Field] = {"tokens": text_field, "label": LabelField(label)}
        return Instance(fields)

    @overrides
    def apply_token_indexers(self, instance: Instance) -> None:
        instance.fields["tokens"]._token_indexers = self._token_indexers  # type: ignore
```
and my config file is as follow
```
{
    "dataset_reader": {
        "type": "case_description",
        "label": "label",
        "token_indexers": {
            "tokens": {
                "type": "pretrained_transformer",
                "model_name": "bert-base-cased"
            }
        },
        "tokenizer": {
            "type": "pretrained_transformer",
            "model_name": "bert-base-cased",
            "max_length": 512
        }
    },
    "train_data_path": "data/splits_no_material/train.jsonl",
    "validation_data_path": "data/splits_no_material/val.jsonl",
    "model": {
        "type": "basic_classifier",
        "text_field_embedder": {
            "token_embedders": {
                "tokens": {
                    "type": "pretrained_transformer",
                    "model_name": "bert-base-cased",
                    "train_parameters": true
                }
            }
        },
        "seq2vec_encoder": {
            "type": "bert_pooler",
            "pretrained_model": "bert-base-cased",
            "dropout": 0.1
        }
    },
    "data_loader": {
        "type": "multiprocess",
        "batch_size": 16
    },
    "trainer": {
        ...some trainer specs...
    }
}
```
my dataset is a `jsonl` with lines like
```
{"caseDescription": "xxxxxxxx", "label": "yyyyyy"}
```
</p>
</details>
|
closed
|
2021-03-16T11:22:25Z
|
2021-03-24T15:03:46Z
|
https://github.com/allenai/allennlp/issues/5055
|
[
"bug"
] |
McKracken
| 7
|
sanic-org/sanic
|
asyncio
| 2,891
|
Wrapping the retrieved JSON data via dependency injection results in empty data
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
I encapsulated a class to retrieve JSON data and then injected this class into the app. The result is that the obtained JSON is None.
### Code snippet
```python
# class
class JSonData:
    data = None

    def __init__(self, data: Optional[dict] = None):
        self.data = data

    @classmethod
    def get_json_data(cls, request: Request):
        try:
            data = request.json or {}
        except Exception:
            data = {}
        return cls(data)

# -----------
# Register a dependency that captures the request body JSON
app.ext.add_dependency(JSonData, JSonData.get_json_data)
```
### Expected Behavior
request.json=None
### How do you run Sanic?
Sanic CLI
### Operating System
Windows
### Sanic Version
23.6.0
### Additional context
When I print request.json inside the view function, there is data, so it is probably a timing issue.
|
closed
|
2024-01-06T08:42:03Z
|
2024-05-31T06:35:54Z
|
https://github.com/sanic-org/sanic/issues/2891
|
[
"bug"
] |
f754699
| 1
|
trevismd/statannotations
|
seaborn
| 122
|
bump seaborn version
|
The current version requires seaborn>=0.9.0,<0.12.
However, there is an incompatibility between pandas 2.0 and seaborn which is solved only in seaborn >= 0.12.
Would it be possible to bump the versions in the requirements?
Thanks
|
open
|
2023-05-26T17:41:54Z
|
2025-01-24T07:47:20Z
|
https://github.com/trevismd/statannotations/issues/122
|
[
"enhancement",
"help wanted"
] |
revesansparole
| 2
|
roboflow/supervision
|
computer-vision
| 1,429
|
Autodistill or Reparameterize?
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
So Roboflow provides a framework with Autodistill to transfer knowledge from larger foundation models into smaller models trained on custom data that run faster: https://roboflow.com/train/yolo-world-and-yolov8. I'm just curious about the differences between this framework and reparameterization of YOLO-World on the same custom dataset to improve efficiency on custom datasets (https://github.com/AILab-CVC/YOLO-World/blob/master/docs/reparameterize.md). From the YOLO-World paper, it does seem that reparameterization, at least for the COCO dataset's vocabulary, performs slightly better than fine-tuned YOLOv8.

Just wondering, are there merits to both of the methods? Has anybody evaluated either approach, and which would be the recommended one? Thanks!
### Additional
_No response_
|
closed
|
2024-08-05T11:32:36Z
|
2024-08-06T09:37:31Z
|
https://github.com/roboflow/supervision/issues/1429
|
[
"question"
] |
adrielkuek
| 1
|