| repo_name (string, 9–75 chars) | topic (string, 30 classes) | issue_number (int64, 1–203k) | title (string, 1–976 chars) | body (string, 0–254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38–105 chars) | labels (list, 0–9 items) | user_login (string, 1–39 chars) | comments_count (int64, 0–452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
sqlalchemy/alembic
|
sqlalchemy
| 1,363
|
AttributeError: 'ConfigParser' object has no attribute '_proxies'
|
**Describe the bug**
After installing the `alembic` package, I ran the `alembic init migrations` command and Alembic created the migrations directory and the `alembic.ini` file without any errors. However, when I use the `alembic revision --autogenerate -m "Some message"` command, I get the error `AttributeError: 'ConfigParser' object has no attribute '_proxies'` and I don't know where the problem is. I also tested different versions of SQLAlchemy and Alembic, but the problem persists. The same error is also displayed when using the `alembic check` command.
Below is a picture of the error that occurred:

**Versions.**
- OS: Windows 10
- Python: 3.9
- Alembic: 1.12.1
- SQLAlchemy: 2.0.23
- Database: Sqlite
|
closed
|
2023-11-22T04:39:48Z
|
2023-11-22T16:01:34Z
|
https://github.com/sqlalchemy/alembic/issues/1363
|
[
"third party library / application issues"
] |
MosTafa2K
| 2
|
seleniumbase/SeleniumBase
|
web-scraping
| 2,098
|
Options with undetected driver
|
Hi,
How do I add Chrome options to the undetected driver?
I don't see any example.
Thanks
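For reference, a minimal sketch of the kind of thing being asked for, assuming SeleniumBase's `Driver()` accepts `uc` and a `chromium_arg` parameter for extra Chromium flags (both are assumptions here; the current SeleniumBase docs are authoritative):
```python
# Hedged sketch: uc=True is assumed to enable the undetected driver, and
# chromium_arg is assumed to pass extra Chromium command-line options
# (verify both against the SeleniumBase documentation).
from seleniumbase import Driver

driver = Driver(uc=True, chromium_arg="--lang=en-US")
try:
    driver.get("https://example.com")
finally:
    driver.quit()
```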
|
closed
|
2023-09-12T12:25:56Z
|
2023-09-12T15:43:01Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2098
|
[
"question",
"UC Mode / CDP Mode"
] |
FranciscoPalomares
| 3
|
deepspeedai/DeepSpeed
|
deep-learning
| 6,044
|
Inference acceleration doesn't work
|
I'm trying to use DeepSpeed to accelerate a ViT model implemented by this [project](https://github.com/lucidrains/vit-pytorch). I following this [tutorial](https://www.deepspeed.ai/tutorials/inference-tutorial/) to optimize the model:
```
from vit_pytorch import ViT
import torch
import deepspeed
import time


def optimize_by_deepspeed(model):
    ds_engine = deepspeed.init_inference(
        model,
        tensor_parallel={"tp_size": 1},
        dtype=torch.float,
        checkpoint=None,
        replace_with_kernel_inject=True)
    model = ds_engine.module
    return model


device = "cuda:0"
model = ViT(
    image_size=32,
    patch_size=8,
    channels=64,
    num_classes=512,
    dim=512,
    depth=6,
    heads=16,
    mlp_dim=1024,
    dropout=0.1,
    emb_dropout=0.1
).to(device)
input_data = torch.randn(1, 64, 32, 32, dtype=torch.float32, device=device)
model = optimize_by_deepspeed(model)
start_time = time.time()
for i in range(1000):
    res = model(input_data)
print("time consumption: {}".format(time.time() - start_time))
```
and launch the program with following command:
```
deepspeed --num_gpus 1 example.py
```
However, I found that the time consumption is roughly the same whether or not `model = optimize_by_deepspeed(model)` is executed. Is there any problem in my code?
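One thing worth ruling out before comparing the numbers (a hedged aside, not part of the original script): CUDA kernels launch asynchronously, so a plain `time.time()` loop can misstate the difference. A sketch of a more careful measurement with warm-up and synchronization:
```python
# Hedged benchmarking sketch (not part of the original script). CUDA kernels
# launch asynchronously, so synchronizing before reading the clock and adding
# a short warm-up makes the before/after comparison more trustworthy.
import time
import torch

def benchmark(model, input_data, iters=1000, warmup=10):
    with torch.no_grad():
        for _ in range(warmup):       # warm-up: kernel selection, caches
            model(input_data)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(input_data)
        torch.cuda.synchronize()      # wait for all queued GPU work to finish
    return (time.time() - start) / iters
```
Timing both the plain model and the `init_inference` output with a helper like this makes the comparison more meaningful.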
|
closed
|
2024-08-17T09:13:52Z
|
2024-09-07T08:13:14Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6044
|
[] |
broken-dream
| 2
|
Gerapy/Gerapy
|
django
| 255
|
Spider Works in Terminal Not in Gerapy
|
Before I start I just want to say that you all have done a great job developing this project. I love gerapy. I will probably start contributing to the project. I will try to document this as well as I can so it can be helpful to others.
**Describe the bug**
I have a scrapy project which runs perfectly fine in terminal using the following command:
`scrapy crawl examplespider`
However, when I schedule it in a task and run it on my local scrapyd client it runs but immediately closes. I don't know why it opens and closes without doing anything. Throws no errors. I think it's a config file issue. When I view the results of the job it shows the following:
```
y.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-12-15 07:03:21 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-12-15 07:03:21 [scrapy.core.engine] INFO: Spider opened
2022-12-15 07:03:21 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-12-15 07:03:21 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-12-15 07:03:21 [scrapy.core.engine] INFO: Closing spider (finished)
2022-12-15 07:03:21 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.002359,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2022, 12, 15, 7, 3, 21, 314439),
'log_count/DEBUG': 1,
'log_count/INFO': 10,
'log_count/WARNING': 1,
'memusage/max': 63709184,
'memusage/startup': 63709184,
'start_time': datetime.datetime(2022, 12, 15, 7, 3, 21, 312080)}
2022-12-15 07:03:21 [scrapy.core.engine] INFO: Spider closed (finished)
```
In the logs it shows the following:
**/home/ubuntu/env/scrape/bin/logs/examplescraper/examplespider**
```
2022-12-15 07:03:21 [scrapy.utils.log] INFO: Scrapy 2.7.1 started (bot: examplescraper)
2022-12-15 07:03:21 [scrapy.utils.log] INFO: Versions: lxml 4.9.1.0, libxml2 2.9.14, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.1, Twisted 22.10.0, Python 3.8.10 (default, Nov 14 2022, 12:59:47) - [GCC 9.4.0], pyOpenSSL 22.1.0 (OpenSSL 3.0.7 1 Nov 2022), cryptography 38.0.4, Platform Linux-5.15.0-1026-aws-x86_64-with-glibc2.29
2022-12-15 07:03:21 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'examplescraper',
'DOWNLOAD_DELAY': 0.1,
'LOG_FILE': 'logs/examplescraper/examplespider/8d623d447c4611edad0641137877ddff.log',
'NEWSPIDER_MODULE': 'examplespider.spiders',
'SPIDER_MODULES': ['examplespider.spiders'],
'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
'(KHTML, like Gecko) Chrome/101.0.4951.67 Safari/537.36'
}
2022-12-15 07:03:21 [py.warnings] WARNING: /home/ubuntu/env/scrape/lib/python3.8/site-packages/scrapy/utils/request.py:231: ScrapyDeprecationWarning: '2.6' is a deprecated value for the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting.
It is also the default value. In other words, it is normal to get this warning if you have not defined a value for the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting. This is so for backward compatibility reasons, but it will change in a future version of Scrapy.
See the documentation of the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting for information on how to handle this
deprecation.
return cls(crawler)
2022-12-15 07:03:21 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.epollreactor.EPollReactor
2022-12-15 07:03:21 [scrapy.extensions.telnet] INFO: Telnet Password: b11a24faee23f82c
2022-12-15 07:03:21 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2022-12-15 07:03:21 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-12-15 07:03:21 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-12-15 07:03:21 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-12-15 07:03:21 [scrapy.core.engine] INFO: Spider opened
2022-12-15 07:03:21 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-12-15 07:03:21 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-12-15 07:03:21 [scrapy.core.engine] INFO: Closing spider (finished)
2022-12-15 07:03:21 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.002359,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2022, 12, 15, 7, 3, 21, 314439),
'log_count/DEBUG': 1,
'log_count/INFO': 10,
'log_count/WARNING': 1,
'memusage/max': 63709184,
'memusage/startup': 63709184,
'start_time': datetime.datetime(2022, 12, 15, 7, 3, 21, 312080)
}
2022-12-15 07:03:21 [scrapy.core.engine] INFO: Spider closed (finished)
```
**/home/ubuntu/gerapy/logs**
```
ubuntu@ip-172-26-13-235:~/gerapy/logs$ cat 20221215065310.log
INFO - 2022-12-15 14:53:18,043 - process: 480 - scheduler.py - gerapy.server.core.scheduler - 105 - scheduler - successfully synced task with jobs with force
INFO - 2022-12-15 14:54:15,011 - process: 480 - scheduler.py - gerapy.server.core.scheduler - 34 - scheduler - execute job of client LOCAL, project examplescraper, spider examplespider
ubuntu@ip-172-26-13-235:~/gerapy/logs$
```
**To Reproduce**
Steps to reproduce the behavior:
1. AWS Ubuntu 20.04 Instance
2. Use python3 virtual environment and follow the installation instructions
3. Create a systemd service for scrapyd and gerapy by doing the following:
```
cd /lib/systemd/system
sudo nano scrapyd.service
```
paste the following:
```
[Unit]
Description=Scrapyd service
After=network.target
[Service]
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu/env/scrape/bin
ExecStart=/home/ubuntu/env/scrape/bin/scrapyd
[Install]
WantedBy=multi-user.target
```
Issue the following commands:
```
sudo systemctl enable scrapyd.service
sudo systemctl start scrapyd.service
sudo systemctl status scrapyd.service
```
It should say: **active (running)**
Create a script to run gerapy as a systemd service
```
cd ~/virtualenv/exampleproject/bin/
nano runserv-gerapy.sh
```
Paste the following:
```
#!/bin/bash
cd /home/ubuntu/virtualenv
source exampleproject/bin/activate
cd /home/ubuntu/gerapy
gerapy runserver 0.0.0.0:8000
```
Give this file execute permissions
`sudo chmod +x runserve-gerapy.sh`
Navigate back to systemd and create a service to run the runserve-gerapy.sh
```
cd /lib/systemd/system
sudo nano gerapy-web.service
```
Paste the following:
```
[Unit]
Description=Gerapy Webserver Service
After=network.target
[Service]
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu/virtualenv/exampleproject/bin
ExecStart=/bin/bash /home/ubuntu/virtualenv/exampleproject/bin/runserver-gerapy.sh
[Install]
WantedBy=multi-user.target
```
Again issue the following:
```
sudo systemctl enable gerapy-web.service
sudo systemctl start gerapy-web.service
sudo systemctl status gerapy-web.service
```
Look for **active (running)** and navigate to http://your.pub.ip.add:8000 or http://localhost:8000 or http://127.0.0.1:8000 to verify that it is running. Reboot the instance to verify that the services are running on system startup.
5. Log in and create a client for the local scrapyd service. Use IP 127.0.0.1 and Port 6800. No Auth. Save it as "Local" or "Scrapyd"
6. Create a project. Select Clone. For testing I used the following github scrapy project: https://github.com/eneiromatos/NebulaEmailScraper (actually a pretty nice starter project). Save the project. Build the project. Deploy the project. (If you get an error when deploying make sure to be running in the virtual env, you might need to reboot).
7. Create a task. Make sure the project name and spider name matches what is in the scrapy.cfg and examplespider.py files and save the task. Schedule the task. Run the task
**Traceback**
See logs above ^^^
**Expected behavior**
It should run for at least 5 minutes and output to a file called emails.json in the project root folder (the folder with scrapy.cfg file)
**Screenshots**
I can upload screenshots if requested.
**Environment (please complete the following information):**
- OS: AWS Ubuntu 20.04
- Browser Firefox
- Python Version 3.8
- Gerapy Version 0.9.11 (latest)
**Additional context**
Add any other context about the problem here.
|
open
|
2022-12-15T08:57:52Z
|
2022-12-17T19:18:58Z
|
https://github.com/Gerapy/Gerapy/issues/255
|
[
"bug"
] |
wmullaney
| 0
|
saulpw/visidata
|
pandas
| 1,768
|
[save_filetype] when save_filetype is set, you cannot gY as any other filetype on the DirSheet
|
**Small description**
I use `options.save_filetype = 'jsonl'` in my `.visidatarc`. I use this to set my default save type. However, sometimes I like to copy rows out as different file types, csv, fixed etc. and so I press gY and provide [e.g.] `copy 19 rows as filetype: fixed` but the response I get back is `"fixed" │ copying 19 rows to system clipboard as fixed │ saving 1 sheets to .fixed as jsonl` and the rows that are copied are in fact `jsonl`.
**Expected result**
To be able to set `options.save_filetype` and this be provided as the default option when copying and/or saving, however be able to save as different filetypes on-the-fly (as expected).
**Actual result with screenshot**
Shown above
**Steps to reproduce with sample data and a .vd**
1. Set `options.save_filetype = 'jsonl'` in `.visidatarc`
2. open a ~~file~~ directory (DirSheet) _< edited 2023-03-01_
3. Select some rows
4. Press `gY` (`syscopy-selected`)
5. enter "fixed"
|
closed
|
2023-02-27T19:17:42Z
|
2023-10-18T05:56:12Z
|
https://github.com/saulpw/visidata/issues/1768
|
[
"bug",
"fixed"
] |
geekscrapy
| 17
|
keras-team/keras
|
data-science
| 20,104
|
Tensorflow model.fit fails on test_step: 'NoneType' object has no attribute 'items'
|
I am using the tf.data module to load my datasets, and the training and validation data pipelines are almost identical. The train_step works properly and training on the first epoch continues to the last batch, but in the test_step I get the following error:
```shell
353 val_logs = {
--> 354 "val_" + name: val for name, val in val_logs.items()
355 }
356 epoch_logs.update(val_logs)
358 callbacks.on_epoch_end(epoch, epoch_logs)
AttributeError: 'NoneType' object has no attribute 'items'
```
Here is the code for fitting the model:
```python
results = auto_encoder.fit(
    train_data,
    epochs=config['epochs'],
    steps_per_epoch=(num_train // config['batch_size']),
    validation_data=valid_data,
    validation_steps=(num_valid // config['batch_size']) - 1,
    callbacks=callbacks
)
```
I should mention that I have used `.repeat()` on both train_data and valid_data, so the problem is not a lack of samples.
|
closed
|
2024-08-09T15:32:39Z
|
2024-08-10T17:59:05Z
|
https://github.com/keras-team/keras/issues/20104
|
[
"stat:awaiting response from contributor",
"type:Bug"
] |
JVD9kh96
| 2
|
napari/napari
|
numpy
| 6,744
|
Remove lambdas in help menu actions `_help_actions.py`
|
We should remove lambdas used in help menu actions in `napari/_app_model/actions/_help_actions.py`.
The callbacks are all calls to `webbrowser.open`. This is a case where having `kwargs` field in actions would be useful. (ref: https://github.com/pyapp-kit/app-model/issues/52)
|
closed
|
2024-03-13T11:21:08Z
|
2024-05-20T09:33:08Z
|
https://github.com/napari/napari/issues/6744
|
[
"maintenance"
] |
lucyleeow
| 0
|
autogluon/autogluon
|
computer-vision
| 4,181
|
Failed to load cask: libomp.rb
|
$ brew install libomp.rb
Error: Failed to load cask: libomp.rb
Cask 'libomp' is unreadable: wrong constant name #<Class:0x00000001150a5b48>
Warning: Treating libomp.rb as a formula.
==> Fetching libomp
==> Downloading https://mirrors.ustc.edu.cn/homebrew-bottles/libomp-11.1.0.big_sur.bottle.tar.gz
curl: (22) The requested URL returned error: 404
Warning: Bottle missing, falling back to the default domain...
==> Downloading https://ghcr.io/v2/homebrew/core/libomp/manifests/11.1.0
Already downloaded: /Users/leo/Library/Caches/Homebrew/downloads/d07c72aee9e97441cb0e3a5ea764f8acae989318597336359b6e111c3f4c44b1--libomp-11.1.0.bottle_manifest.json
==> Downloading https://ghcr.io/v2/homebrew/core/libomp/blobs/sha256:ec279162f0062c675ea96251801a99c19c3b82f395f1598ae2f31cd4cbd9a963
############################################################################################################################################################################## 100.0%
Warning: libomp 18.1.5 is available and more recent than version 11.1.0.
==> Pouring libomp--11.1.0.big_sur.bottle.tar.gz
🍺 /usr/local/Cellar/libomp/11.1.0: 9 files, 1.4MB
==> Running `brew cleanup libomp`...
Disable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP.
Hide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`).
Removing: /Users/leo/Library/Caches/Homebrew/libomp--11.1.0... (467.3KB)
|
closed
|
2024-05-08T02:19:54Z
|
2024-06-27T09:26:02Z
|
https://github.com/autogluon/autogluon/issues/4181
|
[
"bug: unconfirmed",
"OS: Mac",
"Needs Triage"
] |
LeonTing1010
| 3
|
christabor/flask_jsondash
|
flask
| 134
|
Using the example app
|
I am sorry if this sounds stupid, but I am new to Flask development. I am confused about how exactly to use the example app to study how flask_jsondash works. I ran the app and clicked the link that says 'Visit the chart blueprints', but it led to a server timeout error. Looking at the code, I understand that there is no function mapped to the '/charts' URL; I guess you want people to implement that themselves, but I can't figure out what I need to do. The docs seem to be written so that people with experience can understand them quickly, but I can't. Do you have some guidance for people who are new to this? Specifically, I would appreciate a newbie tutorial. It does not have to be detailed, just steps that I can follow to understand more easily.
|
closed
|
2017-07-19T02:50:23Z
|
2017-07-20T04:43:50Z
|
https://github.com/christabor/flask_jsondash/issues/134
|
[] |
FMFluke
| 7
|
davidteather/TikTok-Api
|
api
| 629
|
[FEATURE_REQUEST] - Accessing private tik toks
|
Unable to access private tik toks.
Running v3.9.9; I have manually entered the verifyFp cookie from a logged-in account with access to the private content in question. It works fine for public TikToks, but for private ones it just runs and stops with no error messages.
I know variations of this topic have been discussed in the past; is there any functionality for this as of June 2021?
I also noticed the TikTokUser class, which seems relevant. Any help is appreciated.
|
closed
|
2021-06-21T23:50:16Z
|
2021-08-07T00:13:59Z
|
https://github.com/davidteather/TikTok-Api/issues/629
|
[
"feature_request"
] |
alanblue2000
| 1
|
pydata/pandas-datareader
|
pandas
| 495
|
Release 0.7.0
|
The recent re-addition of Yahoo! will trigger a new release in the next week. I will try to get as many PRs in before then.
|
closed
|
2018-02-21T18:22:00Z
|
2018-09-12T11:32:24Z
|
https://github.com/pydata/pandas-datareader/issues/495
|
[] |
bashtage
| 15
|
kaliiiiiiiiii/Selenium-Driverless
|
web-scraping
| 300
|
is there a way to start it with firefox ?
|
So my main browser is Firefox, but for selenium_driverless I use Chrome.
Is there a way to use Firefox instead? Thanks
|
closed
|
2024-12-26T12:55:21Z
|
2024-12-26T13:05:00Z
|
https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/300
|
[] |
firaki12345-cmd
| 1
|
ets-labs/python-dependency-injector
|
flask
| 814
|
Configuration object is a dictionary and not an object with which the dot notation could be used to access its elements
|
I cannot figure out why the config is a dictionary, while in every example it is an object whose variables are accessed with the "." (dot) operator.
Here is a simple code that I used:
```
from dependency_injector import containers, providers
from dependency_injector.wiring import Provide, inject


class Container(containers.DeclarativeContainer):
    config = providers.Configuration()


@inject
def use_config(config: providers.Configuration = Provide[Container.config]):
    print(type(config))
    print(config)


if __name__ == "__main__":
    container = Container()
    container.config.from_yaml("config.yaml")
    container.wire(modules=[__name__])
    use_config()
```
and the type is a 'dict'
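A hedged sketch of one possible reading of the API (an assumption on my part, not a confirmed answer): `Provide[Container.config]` injects the configuration's current value, which is a plain dict, whereas injecting the provider itself keeps dot access:
```python
# Hedged sketch: Provide[Container.config] injects the configuration *value*
# (a dict); Provide[Container.config.provider] injects the Configuration
# provider, which supports dot notation (call it to read a value).
from dependency_injector import containers, providers
from dependency_injector.wiring import Provide, inject


class Container(containers.DeclarativeContainer):
    config = providers.Configuration()


@inject
def use_config(config: providers.Configuration = Provide[Container.config.provider]):
    print(type(config))           # a Configuration provider, not a dict
    print(config.some.option())   # dot notation; calling returns the value


if __name__ == "__main__":
    container = Container()
    container.config.from_dict({"some": {"option": "value"}})
    container.wire(modules=[__name__])
    use_config()
```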
|
open
|
2024-09-02T12:55:35Z
|
2025-01-04T16:37:36Z
|
https://github.com/ets-labs/python-dependency-injector/issues/814
|
[] |
Zevrap-81
| 2
|
zappa/Zappa
|
django
| 520
|
[Migrated] Feature Request: Including vs. Excluding
|
Originally from: https://github.com/Miserlou/Zappa/issues/1366 by [rr326](https://github.com/rr326)
This may go against your fundamental design philosophy, but I feel it would be better to have Zappa require an explicit inclusion list rather than an exclusion list.
Background:
I've just spent a day trying to figure out why my Zappa installation stopped working. After following one cryptic lambda error after another, and reading many Zappa issues, I finally figured out that my package was > 500MB. (And it is a SIMPLE Flask app.) After more research, I figured out that it was including all sorts of stuff that it didn't need to. Then after a lot of trial and error, I excluded one dir after another, until my previously > 500MB .gz is now about 8 MB!
I'm not saying Zappa did anything "wrong". But it simply couldn't guess what should and shouldn't have been included in my project. (I'm happy to share a project structure and give details if you want, but I think my structure is reasonable.)
By including an "include" array, I know that it is my responsibility to modify it. And if you make it required and start with: `"include": ["."]`, I see it as a clue to tell me I need to figure it out.
Alternatively, much more prominent information on the importance of "exclude" in the documentation would help. I could take a stab at that if it would be helpful, though I don't really understand what is and isn't included in the first place.
|
closed
|
2021-02-20T09:43:51Z
|
2022-07-16T07:25:01Z
|
https://github.com/zappa/Zappa/issues/520
|
[
"enhancement",
"feature-request"
] |
jneves
| 2
|
chainer/chainer
|
numpy
| 8,623
|
can't use chainer with new cupy, can't install old cupy
|
Chainer officially supports only cupy == 7.8.0. Sadly, I can't install the older cupy, and when I install a newer cupy it is not recognized by chainer.
Option 1:
`pip install Cython` # works and installs Cython 0.29.31
`pip install cupy==7.8.0` # fails
> CUDA_PATH : /cvmfs/soft.computecanada.ca/easybuild/software/2020/Core/cudacore/11.0.2
> NVTOOLSEXT_PATH : (none)
> NVCC : (none)
> ROCM_HOME : (none)
>
> Modules:
> cuda : Yes (version 11000)
> cusolver : Yes
> cudnn : Yes (version 8003)
> nccl : No
> -> Include files not found: ['nccl.h']
> -> Check your CFLAGS environment variable.
> nvtx : Yes
> thrust : Yes
> cutensor : No
> -> Include files not found: ['cutensor.h']
> -> Check your CFLAGS environment variable.
> cub : Yes
>
> WARNING: Some modules could not be configured.
> CuPy will be installed without these modules.
> Please refer to the Installation Guide for details:
> https://docs.cupy.dev/en/stable/install.html
>
> ************************************************************
>
> NOTICE: Skipping cythonize as cupy/core/_dtype.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/_kernel.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/_memory_range.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/_routines_indexing.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/_routines_logic.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/_routines_manipulation.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/_routines_math.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/_routines_sorting.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/_routines_statistics.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/_scalar.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/core.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/dlpack.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/flags.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/internal.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/fusion.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/core/raw.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/cublas.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/cufft.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/curand.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/cusparse.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/device.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/driver.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/memory.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/memory_hook.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/nvrtc.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/pinned_memory.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/profiler.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/function.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/stream.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/runtime.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/texture.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/util.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/cusolver.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/cudnn.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cudnn.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/nvtx.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/thrust.pyx does not exist.
> NOTICE: Skipping cythonize as cupy/cuda/cub.pyx does not exist.
> Traceback (most recent call last):
> File "<string>", line 2, in <module>
> File "<pip-setuptools-caller>", line 34, in <module>
> File "/tmp/pip-install-3nvzj2jg/cupy_b0144fd6a6cf445885b7f609ca09f2cc/setup.py", line 156, in <module>
> setup(
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/__init__.py", line 87, in setup
> return distutils.core.setup(**attrs)
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 177, in setup
> return run_commands(dist)
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 193, in run_commands
> dist.run_commands()
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
> self.run_command(cmd)
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/dist.py", line 1217, in run_command
> super().run_command(command)
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
> cmd_obj.run()
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/command/install.py", line 68, in run
> return orig.install.run(self)
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/_distutils/command/install.py", line 695, in run
> self.run_command('build')
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 317, in run_command
> self.distribution.run_command(command)
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/dist.py", line 1217, in run_command
> super().run_command(command)
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
> cmd_obj.run()
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/command/build.py", line 24, in run
> super().run()
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/_distutils/command/build.py", line 131, in run
> self.run_command(cmd_name)
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 317, in run_command
> self.distribution.run_command(command)
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/dist.py", line 1217, in run_command
> super().run_command(command)
> File "/home/jolicoea/vidgen2/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
> cmd_obj.run()
> File "/tmp/pip-install-3nvzj2jg/cupy_b0144fd6a6cf445885b7f609ca09f2cc/cupy_setup_build.py", line 991, in run
> check_extensions(self.extensions)
> File "/tmp/pip-install-3nvzj2jg/cupy_b0144fd6a6cf445885b7f609ca09f2cc/cupy_setup_build.py", line 728, in check_extensions
> raise RuntimeError('''\
> RuntimeError: Missing file: cupy/cuda/cub.cpp
> Please install Cython 0.28.0 or later. Please also check the version of Cython.
> See https://docs.cupy.dev/en/stable/install.html for details.
I have Cython 0.29.31, pip install says that 0.28.0 is not available so I can't downgrade.
Option 2:
`pip install cupy` or install from a git clone with `pip install -e .`
cupy installs properly and I can use it, but chainer does not think that cupy is properly installed and I get cupy.core not found errors
### To Reproduce
```py
pip install Cython
pip install cupy==7.8.0 # error
pip install cupy-cuda110 # error not found
pip install cupy # works
pip install chainer
python -c 'import chainer; chainer.print_runtime_info()' # cupy not recognized by chainer
```
### Environment
```
# python -c 'import chainer; chainer.print_runtime_info()'
Chainer: 4.5.0
NumPy: 1.23.0
CuPy: Not Available
# python -c 'import cupy; cupy.show_config()'
OS : Linux-3.10.0-1160.62.1.el7.x86_64-x86_64-with-glibc2.2.5
Python Version : 3.8.10
CuPy Version : 11.0.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 1.23.0
SciPy Version : 1.8.0
Cython Build Version : 0.29.31
Cython Runtime Version : 0.29.31
CUDA Root : /cvmfs/soft.computecanada.ca/easybuild/software/2020/Core/cudacore/11.0.2
nvcc PATH : /cvmfs/soft.computecanada.ca/easybuild/software/2020/Core/cudacore/11.0.2/bin/nvcc
CUDA Build Version : 11040
CUDA Driver Version : 11060
CUDA Runtime Version : 11040
cuBLAS Version : (available)
cuFFT Version : 10502
cuRAND Version : 10205
cuSOLVER Version : (11, 2, 0)
cuSPARSE Version : (available)
NVRTC Version : (11, 4)
Thrust Version : 101201
CUB Build Version : 101201
Jitify Build Version : d90e2e0
cuDNN Build Version : 8200
cuDNN Version : 8200
NCCL Build Version : 21104
NCCL Runtime Version : 21104
cuTENSOR Version : None
cuSPARSELt Build Version : None
Device 0 Name : Tesla V100-SXM2-16GB
Device 0 Compute Capability : 70
Device 0 PCI Bus ID : 0000:1C:00.0
Device 1 Name : Tesla V100-SXM2-16GB
Device 1 Compute Capability : 70
Device 1 PCI Bus ID : 0000:1D:00.0
```
|
closed
|
2022-08-01T17:54:52Z
|
2022-08-02T04:51:48Z
|
https://github.com/chainer/chainer/issues/8623
|
[] |
AlexiaJM
| 1
|
ydataai/ydata-profiling
|
pandas
| 1,288
|
Missing tags for latest v3.6.x releases
|
On PyPI, there are 3.6.4, 3.6.5 and 3.6.6 releases (https://pypi.org/project/pandas-profiling/#history), but no corresponding tags in this repo. Can you add them please?
|
closed
|
2023-03-17T10:08:19Z
|
2023-04-13T11:37:54Z
|
https://github.com/ydataai/ydata-profiling/issues/1288
|
[
"question/discussion ❓"
] |
jamesmyatt
| 1
|
tensorly/tensorly
|
numpy
| 143
|
Feature Request: Outer Product
|
Hi,
I am starting to work with tensors and found your toolbox. When trying the CPD, I first wanted to create a rank-3 tensor and then decompose it again and look at the result, since the CPD is unique. There I noticed that no outer vector product is implemented in the toolbox. It would be nice if there were one, for simulations etc.
I know there is numpy.outer, but this only works for 2 vectors.
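For illustration, a minimal numpy-only sketch (not tensorly's API) of the multi-vector outer product being requested, e.g. to build a rank-1 term for CPD experiments:
```python
# Numpy-only sketch of an outer product of three vectors (a rank-1 tensor).
import numpy as np
from functools import reduce

a, b, c = np.random.rand(3), np.random.rand(4), np.random.rand(5)

# Chain numpy's binary outer product...
t1 = reduce(np.multiply.outer, [a, b, c])     # shape (3, 4, 5)

# ...or spell it out with einsum.
t2 = np.einsum("i,j,k->ijk", a, b, c)

assert np.allclose(t1, t2)
```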
Thanks in advance,
Isi
|
closed
|
2019-12-02T10:49:47Z
|
2019-12-02T12:43:58Z
|
https://github.com/tensorly/tensorly/issues/143
|
[] |
IsabellLehmann
| 2
|
alpacahq/alpaca-trade-api-python
|
rest-api
| 453
|
Paper Trade client does not support fractional trading
|
Not sure if this is a client or server issue, but when trying the code below with a paper trading endpoint and API key:
resp = api.submit_order(
    symbol='SPY',
    notional=450,  # notional value of 1.5 shares of SPY at $300
    side='buy',
    type='market',
    time_in_force='day',
)
I get alpaca_trade_api.rest.APIError: qty is required
|
closed
|
2021-06-19T21:32:40Z
|
2021-06-19T21:42:51Z
|
https://github.com/alpacahq/alpaca-trade-api-python/issues/453
|
[] |
arun-annamalai
| 0
|
neuml/txtai
|
nlp
| 465
|
How to add a custom pipeline into yaml config?
|
I have a MyPipeline class.
How do I add the pipeline?
Can we use this syntax?
```yaml
my_pipeline:
__import__: app.MyPipeline
```
|
closed
|
2023-04-25T11:07:44Z
|
2023-04-26T13:56:58Z
|
https://github.com/neuml/txtai/issues/465
|
[] |
pyoner
| 2
|
microsoft/nni
|
deep-learning
| 5,225
|
customized trial issue from wechat
|
**Describe the issue**:
Hello, how can this operation be performed from the command line? Could I use an nni command to do it?

**Environment**:
- NNI version: --
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
|
open
|
2022-11-14T01:58:20Z
|
2022-11-16T02:41:00Z
|
https://github.com/microsoft/nni/issues/5225
|
[
"feature request"
] |
Lijiaoa
| 0
|
ARM-DOE/pyart
|
data-visualization
| 1,297
|
ENH: Add keyword add_lines in method plot_grid in gridmapdisplay
|
On occasion you would like to have the grid lines but not the coast maps in the plot. Separating the two would be useful (see https://github.com/MeteoSwiss/pyart/blob/dev/pyart/graph/gridmapdisplay.py)
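As a rough cartopy-level illustration of the separation being requested (this is not pyart's `plot_grid` API, just the underlying idea that graticule lines and coastlines are independent layers):
```python
# Cartopy-level sketch: grid (graticule) lines can be drawn without coastlines.
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

ax = plt.axes(projection=ccrs.PlateCarree())
ax.gridlines(draw_labels=True)   # lat/lon grid lines only
# ax.coastlines()                # coastlines stay a separate, optional call
plt.show()
```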
|
closed
|
2022-10-21T11:10:25Z
|
2022-11-03T15:57:17Z
|
https://github.com/ARM-DOE/pyart/issues/1297
|
[
"Enhancement",
"good first issue"
] |
jfigui
| 3
|
alpacahq/alpaca-trade-api-python
|
rest-api
| 409
|
Invalid TimeFrame in getbars for hourly
|
Hello, when I try to use the example, I get an error:
`api.get_bars("AAPL", TimeFrame.Hour, "2021-02-08", "2021-02-08", limit=10, adjustment='raw').df`
```
"""
Traceback (most recent call last):
File "/home/kyle/.virtualenv/lib/python3.8/site-packages/alpaca_trade_api/rest.py", line 160, in _one_request
resp.raise_for_status()
File "/home/kyle/.virtualenv/lib/python3.8/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 422 Client Error: Unprocessable Entity for url: https://data.alpaca.markets/v2/stocks/AAPL/bars?timeframe=1Hour&adjustment=raw&start=2021-02-08&end=2021-02-08&limit=10
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_hourly.py", line 231, in make_df
tdf = api.get_bars("AAPL", TimeFrame.Hour, "2021-02-08", "2021-02-08", limit=10, adjustment='raw').df
File "/home/kyle/.virtualenv/lib/python3.8/site-packages/alpaca_trade_api/rest.py", line 625, in get_bars
bars = list(self.get_bars_iter(symbol,
File "/home/kyle/.virtualenv/lib/python3.8/site-packages/alpaca_trade_api/rest.py", line 611, in get_bars_iter
for bar in bars:
File "/home/kyle/.virtualenv/lib/python3.8/site-packages/alpaca_trade_api/rest.py", line 535, in _data_get_v2
resp = self.data_get('/stocks/{}/{}'.format(symbol, endpoint),
File "/home/kyle/.virtualenv/lib/python3.8/site-packages/alpaca_trade_api/rest.py", line 192, in data_get
return self._request(
File "/home/kyle/.virtualenv/lib/python3.8/site-packages/alpaca_trade_api/rest.py", line 139, in _request
return self._one_request(method, url, opts, retry)
File "/home/kyle/.virtualenv/lib/python3.8/site-packages/alpaca_trade_api/rest.py", line 168, in _one_request
raise APIError(error, http_error)
alpaca_trade_api.rest.APIError: invalid timeframe
```
Minute data seems to work.
|
closed
|
2021-03-30T15:22:37Z
|
2021-07-02T09:35:55Z
|
https://github.com/alpacahq/alpaca-trade-api-python/issues/409
|
[] |
hansonkd
| 7
|
flaskbb/flaskbb
|
flask
| 24
|
Add tests
|
I think tests are needed.
I myself am a big fan of [py.test](http://pytest.org/latest/) in combination with [pytest-bdd](https://github.com/olegpidsadnyi/pytest-bdd) for functional tests.
|
closed
|
2014-03-08T11:14:40Z
|
2018-04-15T07:47:30Z
|
https://github.com/flaskbb/flaskbb/issues/24
|
[] |
hvdklauw
| 15
|
LAION-AI/Open-Assistant
|
python
| 3,293
|
user streak updates in the backend takes up 100% cpu
|
Updating the user streak seems to use 100% CPU in `db.commit()`.
- Add end-of-task debug info
- Batch the commits
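A generic SQLAlchemy-style sketch of what "batch the commits" could look like (illustrative only; `users` and `compute_streak` are hypothetical names, not the actual backend code):
```python
# Hypothetical sketch: commit once per batch of streak updates instead of once
# per user, so the expensive flush/commit work is amortized.
def update_streaks(session, users, compute_streak, batch_size=500):
    for i, user in enumerate(users, start=1):
        user.streak = compute_streak(user)   # mutate ORM objects in memory
        if i % batch_size == 0:
            session.commit()                 # flush a batch at a time
    session.commit()                         # flush the final partial batch
```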
|
closed
|
2023-06-05T00:16:51Z
|
2023-06-07T13:03:34Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3293
|
[
"bug",
"backend"
] |
melvinebenezer
| 0
|
microsoft/nni
|
pytorch
| 4,874
|
Error about running with framecontroller mode
|
**config.yml**
experimentName: example_mnist_pytorch
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 2
debug: false
nniManagerIp: 172.16.40.155
#choice: local, remote, pai, kubeflow
trainingServicePlatform: frameworkcontroller
searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
  builtinTunerName: TPE
  classArgs:
    #choice: maximize, minimize
    optimize_mode: maximize
trial:
  codeDir: .
  taskRoles:
    - name: worker
      taskNum: 1
      command: python3 model.py
      gpuNum: 0
      cpuNum: 1
      memoryMB: 8192
      image: frameworkcontroller/nni:v1.0
      securityContext:
        privileged: true
      frameworkAttemptCompletionPolicy:
        minFailedTaskCount: 1
        minSucceededTaskCount: 1
frameworkcontrollerConfig:
  storage: nfs
  serviceAccountName: frameworkcontroller
  nfs:
    # Your NFS server IP, like 10.10.10.10
    server: <my nfs server ip>
    # Your NFS server export path, like /var/nfs/nni
    path: /nfs/nni
    readOnly: false
nfs server: /etc/exports
/nfs/nni *(rw,no_root_squash,sync,insecure,no_subtree_check)
**k8s logs**
kubectl describe pod nniexpywn5cftxenvma3hh-worker-0:

How to solve this error?
|
open
|
2022-05-19T08:02:48Z
|
2022-06-28T09:39:36Z
|
https://github.com/microsoft/nni/issues/4874
|
[
"user raised",
"support",
"Training Service"
] |
N-Kingsley
| 1
|
ydataai/ydata-profiling
|
data-science
| 1,134
|
Introduce type-checking
|
### Missing functionality
Types are not enforced by Python; nevertheless, introducing type checking would reduce the number of open issues.
### Proposed feature
Introduce type-checking for the user interfaced methods.
### Alternatives considered
_No response_
### Additional context
_No response_
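One possible shape of the proposal, sketched with the third-party `typeguard` package (the issue does not name a tool, so this is only an illustrative assumption):
```python
# Hypothetical user-facing function; @typechecked validates argument and return
# types at call time, so a wrong type fails up front instead of deep inside the
# library.
from typeguard import typechecked
import pandas as pd


@typechecked
def profile(df: pd.DataFrame, title: str = "Report") -> dict:
    return {"title": title, "rows": len(df)}
```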
|
closed
|
2022-10-31T20:40:46Z
|
2022-11-10T11:08:04Z
|
https://github.com/ydataai/ydata-profiling/issues/1134
|
[
"needs-triage"
] |
fabclmnt
| 0
|
davidsandberg/facenet
|
tensorflow
| 292
|
the result question
|
closed
|
2017-05-26T07:41:04Z
|
2017-05-26T07:42:55Z
|
https://github.com/davidsandberg/facenet/issues/292
|
[] |
hanbim520
| 0
|
|
openapi-generators/openapi-python-client
|
rest-api
| 1,002
|
Providing package version via command properties
|
Allow setting the generated client version by providing a `--package-version`-like option to `openapi-python-client generate`.
This would simplify automated processes in Git-based environments.
|
open
|
2024-03-14T10:09:16Z
|
2024-03-14T10:09:16Z
|
https://github.com/openapi-generators/openapi-python-client/issues/1002
|
[] |
pasolid
| 0
|
pywinauto/pywinauto
|
automation
| 983
|
No Conda Installation
|
## Expected Behavior
I am trying to install pywinauto through conda. Here is the documentation:
https://anaconda.org/conda-forge/pywinauto
## Actual Behavior
PackagesNotFoundError: The following packages are not available from current channels:
- pywinauto
## Steps to Reproduce the Problem
1. Open any Conda Environment
2. Write:
2.1. conda install -c conda-forge pywinauto or
2.2. conda install -c conda-forge/label/cf201901 pywinauto or
2.3. conda install -c conda-forge/label/cf202003 pywinauto
## Specifications
- Pywinauto version:
- Python version and bitness: Python 3.7
- Platform and OS: MacOS Sierra v10.12.6
|
closed
|
2020-09-15T14:25:23Z
|
2020-09-15T19:09:25Z
|
https://github.com/pywinauto/pywinauto/issues/983
|
[
"duplicate"
] |
CibelesR
| 3
|
rougier/scientific-visualization-book
|
matplotlib
| 33
|
Requirements files for package versions
|
Having the versions of packages used for the book in a `requirements.txt` would be helpful for others reproducing things a year or so down the line.
Looking through the repo for something such as `numpy` doesn't bring up anything wrt the version used, just that it's imported in several locations.
As this book was only recently finished, perhaps there's an environment on the author's computer which has all the correct package versions in it? A file could then be generated from that (see the sketch after the import list below).
Example search for numpy with `rg numpy` :
```
README.md:* [From Python to Numpy](https://www.labri.fr/perso/nrougier/from-python-to-numpy/) (Scientific Python Volume I)
README.md:* [100 Numpy exercices](https://github.com/rougier/numpy-100)
rst/threed.rst:specify the color of each of the triangle using a numpy array, so let's just do
rst/anatomy.rst: import numpy as np
rst/anatomy.rst: import numpy as np
rst/00-preface.rst: import numpy as np
rst/00-preface.rst: >>> import numpy; print(numpy.__version__)
code/showcases/escher-movie.py:import numpy as np
code/showcases/domain-coloring.py:import numpy as np
code/showcases/mandelbrot.py:import numpy as np
code/showcases/text-shadow.py:import numpy as np
code/showcases/windmap.py:import numpy as np
code/showcases/mosaic.py:import numpy as np
code/showcases/mosaic.py: from numpy.random.mtrand import RandomState
code/showcases/contour-dropshadow.py:import numpy as np
code/showcases/escher.py:import numpy as np
code/showcases/text-spiral.py:import numpy as np
code/showcases/recursive-voronoi.py:import numpy as np
code/showcases/recursive-voronoi.py: from numpy.random.mtrand import RandomState
code/showcases/waterfall-3d.py:import numpy as np
rst/defaults.rst: import numpy as np
code/beyond/stamp.py:import numpy as np
code/beyond/dyson-hatching.py:import numpy as np
code/beyond/tikz-dashes.py:import numpy as np
rst/coordinates.rst: import numpy as np
rst/animation.rst: import numpy as np
rst/animation.rst: import numpy as np
rst/animation.rst: import numpy as np
code/beyond/tinybot.py:import numpy as np
code/beyond/dungeon.py:import numpy as np
cover/cover-pattern.py:import numpy as np
code/beyond/interactive-loupe.py:import numpy as np
code/beyond/bluenoise.py:import numpy as np
code/beyond/bluenoise.py: from numpy.random.mtrand import RandomState
code/beyond/polygon-clipping.py:import numpy as np
code/beyond/radial-maze.py:import numpy as np
code/reference/hatch.py:import numpy as np
code/reference/colormap-sequential-1.py:import numpy as np
code/reference/colormap-uniform.py:import numpy as np
code/reference/marker.py:import numpy as np
code/reference/line.py:import numpy as np
code/ornaments/annotation-zoom.py:import numpy as np
code/scales-projections/text-polar.py:import numpy as np
code/ornaments/legend-regular.py:import numpy as np
code/ornaments/elegant-scatter.py:import numpy as np
code/reference/axes-adjustment.py:import numpy as np
code/ornaments/legend-alternatives.py:import numpy as np
code/reference/colorspec.py:import numpy as np
code/typography/typography-matters.py:import numpy as np
code/colors/color-gradients.py:import numpy as np
code/threed/bunny-6.py:import numpy as np
code/scales-projections/projection-polar-config.py:import numpy as np
code/colors/color-wheel.py:import numpy as np
code/colors/mona-lisa.py:import numpy as np
code/threed/bunny-4.py:import numpy as np
code/ornaments/title-regular.py:import numpy as np
code/typography/text-outline.py:import numpy as np
code/ornaments/bessel-functions.py:import numpy as np
code/scales-projections/projection-polar-histogram.py:import numpy as np
code/rules/rule-6.py:import numpy as np
code/colors/open-colors.py:import numpy as np
code/colors/material-colors.py:import numpy as np
code/colors/flower-polar.py:import numpy as np
code/typography/typography-legibility.py:import numpy as np
code/colors/colored-plot.py:import numpy as np
code/rules/rule-7.py:import numpy as np
code/colors/colored-hist.py:import numpy as np
code/rules/rule-3.py:import numpy as np
code/colors/alpha-vs-color.py:import numpy as np
code/colors/stacked-plots.py:import numpy as np
code/ornaments/label-alternatives.py:import numpy as np
code/typography/tick-labels-variation.py:import numpy as np
code/rules/projections.py:import numpy as np
code/typography/typography-math-stacks.py:import numpy as np
code/rules/rule-2.py:import numpy as np
code/colors/alpha-scatter.py:import numpy as np
code/typography/typography-text-path.py:import numpy as np
code/scales-projections/polar-patterns.py:import numpy as np
code/scales-projections/scales-custom.py:import numpy as np
code/rules/parameters.py:import numpy as np
code/threed/bunny-1.py:import numpy as np
code/scales-projections/geo-projections.py:import numpy as np
code/rules/rule-8.py:import numpy as np
code/threed/bunny-5.py:import numpy as np
code/ornaments/latex-text-box.py:import numpy as np
code/ornaments/annotation-direct.py:import numpy as np
code/typography/projection-3d-gaussian.py:import numpy as np
code/threed/bunny-7.py:import numpy as np
code/rules/helper.py:import numpy as np
code/rules/helper.py: """ Generate a numpy array containing a disc. """
code/typography/typography-font-stacks.py:import numpy as np
code/scales-projections/scales-log-log.py:import numpy as np
code/scales-projections/projection-3d-frame.py:import numpy as np
code/threed/bunnies.py:import numpy as np
code/threed/bunny.py:import numpy as np
code/threed/bunny-3.py:import numpy as np
code/rules/rule-9.py:import numpy as np
code/ornaments/annotate-regular.py:import numpy as np
code/threed/bunny-2.py:import numpy as np
code/reference/tick-locator.py:import numpy as np
code/threed/bunny-8.py:import numpy as np
code/reference/colormap-diverging.py:import numpy as np
code/scales-projections/scales-comparison.py:import numpy as np
code/reference/text-alignment.py:import numpy as np
code/scales-projections/scales-check.py:import numpy as np
code/reference/scale.py:import numpy as np
code/reference/colormap-qualitative.py:import numpy as np
code/reference/tick-formatter.py:import numpy as np
code/rules/graphics.py:import numpy as np
code/rules/rule-1.py:import numpy as np
code/reference/collection.py:import numpy as np
code/ornaments/annotation-side.py:import numpy as np
code/reference/colormap-sequential-2.py:import numpy as np
code/defaults/defaults-exercice-1.py:import numpy as np
code/animation/platecarree.py:import numpy as np
code/defaults/defaults-step-4.py:import numpy as np
code/layout/standard-layout-2.py:import numpy as np
code/coordinates/transforms-blend.py:import numpy as np
code/unsorted/layout-weird.py:import numpy as np
code/unsorted/advanced-linestyles.py:import numpy as np
code/coordinates/transforms-exercise-1.py:import numpy as np
code/animation/fluid-animation.py:import numpy as np
code/unsorted/alpha-gradient.py:import numpy as np
code/animation/sine-cosine.py:import numpy as np
code/optimization/line-benchmark.py:import numpy as np
code/animation/fluid.py:import numpy as np
code/optimization/transparency.py:import numpy as np
code/animation/rain.py:import numpy as np
code/optimization/scatter-benchmark.py:import numpy as np
code/optimization/self-cover.py:import numpy as np
code/animation/imgcat.py:import numpy as np
code/unsorted/hatched-bars.py:import numpy as np
code/optimization/multithread.py:import numpy as np
code/animation/earthquakes.py:import numpy as np
code/unsorted/poster-layout.py:import numpy as np
code/coordinates/transforms-letter.py:import numpy as np
code/optimization/scatters.py:import numpy as np
code/animation/sine-cosine-mp4.py:import numpy as np
code/optimization/multisample.py:import numpy as np
code/unsorted/make-hatch-linewidth.py:import numpy as np
code/animation/lissajous.py:import numpy as np
code/coordinates/transforms-polar.py:import numpy as np
code/unsorted/earthquakes.py:import numpy as np
code/unsorted/git-commits.py:import numpy as np
code/unsorted/github-activity.py:import numpy as np
code/coordinates/collage.py:import numpy as np
code/introduction/matplotlib-timeline.py:import numpy as np
code/coordinates/transforms-floating-axis.py:import numpy as np
code/coordinates/transforms-hist.py:import numpy as np
code/layout/complex-layout-bare.py:import numpy as np
code/animation/less-is-more.py:import numpy as np
code/coordinates/transforms.py:import numpy as np
code/unsorted/alpha-compositing.py:import numpy as np
code/layout/standard-layout-1.py:import numpy as np
code/layout/layout-classical.py:import numpy as np
code/unsorted/stacked-bars.py:import numpy as np
code/layout/complex-layout.py:import numpy as np
code/layout/layout-aspect.py:import numpy as np
code/unsorted/metropolis.py:import numpy as np
code/unsorted/scale-logit.py:import numpy as np
code/unsorted/dyson-hatching.py:import numpy as np
code/unsorted/dyson-hatching.py: from numpy.random.mtrand import RandomState
code/layout/layout-gridspec.py:import numpy as np
code/defaults/defaults-step-1.py:import numpy as np
code/defaults/defaults-step-2.py:import numpy as np
code/defaults/defaults-step-5.py:import numpy as np
code/defaults/defaults-step-3.py:import numpy as np
code/anatomy/bold-ticklabel.py:import numpy as np
code/unsorted/3d/contour.py:import numpy as np
code/unsorted/3d/sphere.py:import numpy as np
code/unsorted/3d/platonic-solids.py:import numpy as np
code/unsorted/3d/surf.py:import numpy as np
code/unsorted/3d/bar.py:import numpy as np
tex/cover-pattern.py:import numpy as np
code/unsorted/3d/scatter.py:import numpy as np
tex/book.bib: url = {https://www.labri.fr/perso/nrougier/from-python-to-numpy/},
code/anatomy/pixel-font.py:import numpy as np
code/anatomy/raster-vector.py:import numpy as np
code/anatomy/zorder-plots.py:import numpy as np
code/anatomy/ruler.py:import numpy as np
code/anatomy/anatomy.py:import numpy as np
code/anatomy/zorder.py:import numpy as np
code/unsorted/3d/bunny.py:import numpy as np
code/unsorted/3d/bunnies.py:import numpy as np
code/unsorted/3d/plot.py:import numpy as np
code/unsorted/3d/plot.py: camera : 4x4 numpy array
code/unsorted/3d/glm.py:import numpy as np
```
These seem to be the imports used in the text:
```
'from __future__ import absolute_import',
'from __future__ import division',
'from __future__ import print_function',
'from __future__ import unicode_literals',
'from datetime import date, datetime',
'from datetime import datetime',
'from dateutil.relativedelta import relativedelta',
'from docutils import nodes',
'from docutils.core import publish_cmdline',
'from docutils.parsers.rst import directives, Directive',
'from fluid import Fluid, inflow',
'from functools import reduce',
'from graphics import *',
'from helper import *',
'from itertools import cycle',
'from math import cos, sin, floor, sqrt, pi, ceil',
'from math import factorial',
'from math import sqrt, ceil, floor, pi, cos, sin',
'from matplotlib import colors',
'from matplotlib import ticker',
'from matplotlib.animation import FuncAnimation, writers',
'from matplotlib.artist import Artist',
'from matplotlib.backend_bases import GraphicsContextBase, RendererBase',
'from matplotlib.backends.backend_agg import FigureCanvas',
'from matplotlib.backends.backend_agg import FigureCanvasAgg',
'from matplotlib.collections import AsteriskPolygonCollection',
'from matplotlib.collections import CircleCollection',
'from matplotlib.collections import EllipseCollection',
'from matplotlib.collections import LineCollection',
'from matplotlib.collections import PatchCollection',
'from matplotlib.collections import PathCollection',
'from matplotlib.collections import PolyCollection',
'from matplotlib.collections import PolyCollection',
'from matplotlib.collections import QuadMesh',
'from matplotlib.collections import RegularPolyCollection',
'from matplotlib.collections import StarPolygonCollection',
'from matplotlib.colors import LightSource',
'from matplotlib.figure import Figure',
'from matplotlib.font_manager import FontProperties',
'from matplotlib.font_manager import findfont, FontProperties',
'from matplotlib.gridspec import GridSpec',
'from matplotlib.gridspec import GridSpec',
'from matplotlib.patches import Circle',
'from matplotlib.patches import Circle',
'from matplotlib.patches import Circle, Rectangle',
'from matplotlib.patches import ConnectionPatch',
'from matplotlib.patches import Ellipse',
'from matplotlib.patches import Ellipse',
'from matplotlib.patches import FancyBboxPatch',
'from matplotlib.patches import PathPatch',
'from matplotlib.patches import Polygon',
'from matplotlib.patches import Polygon',
'from matplotlib.patches import Polygon, Ellipse',
'from matplotlib.patches import Rectangle',
'from matplotlib.patches import Rectangle, PathPatch',
'from matplotlib.path import Path',
'from matplotlib.patheffects import Stroke, Normal',
'from matplotlib.patheffects import withStroke',
'from matplotlib.text import TextPath',
'from matplotlib.textpath import TextPath',
'from matplotlib.ticker import AutoMinorLocator, MultipleLocator, FuncFormatter',
'from matplotlib.ticker import MultipleLocator',
'from matplotlib.ticker import NullFormatter',
'from matplotlib.ticker import NullFormatter, MultipleLocator',
'from matplotlib.ticker import NullFormatter, SymmetricalLogLocator',
'from matplotlib.transforms import Affine2D',
'from matplotlib.transforms import ScaledTranslation',
'from matplotlib.transforms import blended_transform_factory, ScaledTranslation',
'from mpl_toolkits.axes_grid1 import ImageGrid',
'from mpl_toolkits.axes_grid1 import ImageGrid',
'from mpl_toolkits.axes_grid1 import make_axes_locatable',
'from mpl_toolkits.axes_grid1.inset_locator import inset_axes',
'from mpl_toolkits.axes_grid1.inset_locator import mark_inset',
'from mpl_toolkits.axes_grid1.inset_locator import mark_inset',
'from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes',
'from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes',
'from mpl_toolkits.mplot3d import Axes3D, art3d',
'from mpl_toolkits.mplot3d import Axes3D, proj3d, art3d',
'from multiprocessing import Pool',
'from numpy.random.mtrand import RandomState',
'from parameters import *',
'from pathlib import Path',
'from projections import *',
'from pylab import *',
'from scipy.ndimage import gaussian_filter',
'from scipy.ndimage import gaussian_filter1d',
'from scipy.ndimage import map_coordinates, spline_filter',
'from scipy.sparse.linalg import factorized',
'from scipy.spatial import Voronoi',
'from scipy.special import erf',
'from scipy.special import jn, jn_zeros',
'from shapely.geometry import Polygon',
'from shapely.geometry import box, Polygon',
'from skimage.color import rgb2lab, lab2rgb, rgb2xyz, xyz2rgb',
'from timeit import default_timer as timer',
'from tqdm.autonotebook import tqdm',
'import bluenoise',
'import cartopy',
'import cartopy.crs',
'import cartopy.crs as ccrs',
'import colorsys',
'import dateutil.parser',
'import git',
'import glm',
'import html.parser',
'import imageio',
'import locale',
'import math',
'import matplotlib',
'import matplotlib as mpl',
'import matplotlib.animation as animation',
'import matplotlib.animation as animation',
'import matplotlib.colors as colors',
'import matplotlib.colors as mc',
'import matplotlib.colors as mcolors',
'import matplotlib.gridspec as gridspec',
'import matplotlib.image as mpimg',
'import matplotlib.patches as mpatch',
'import matplotlib.patches as mpatches',
'import matplotlib.patches as patches',
'import matplotlib.path as mpath',
'import matplotlib.path as path',
'import matplotlib.patheffects as PathEffects',
'import matplotlib.patheffects as path_effects',
'import matplotlib.pylab as plt',
'import matplotlib.pyplot as plt',
'import matplotlib.pyplot as plt',
'import matplotlib.ticker as ticker',
'import matplotlib.transforms as transforms',
'import mpl_toolkits.axisartist.floating_axes as floating_axes',
'import mpl_toolkits.mplot3d.art3d as art3d',
'import mpmath',
'import noise',
'import numpy as np',
'import os',
'import plot',
'import re',
'import scipy',
'import scipy.sparse as sp',
'import scipy.spatial',
'import shapely.geometry',
'import sys',
'import tqdm',
'import tqdm',
'import types',
'import urllib',
'import urllib.request'
```
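As a follow-up to the suggestion above that a file could be generated from the author's environment, here is a hedged stdlib sketch that pins whatever versions are currently installed for the third-party packages seen in these imports (the package list is illustrative, not exhaustive):
```python
# Sketch: write a requirements.txt pinning the currently installed versions
# of the third-party packages used by the book's code (illustrative list).
from importlib.metadata import version, PackageNotFoundError

packages = ["numpy", "matplotlib", "scipy", "cartopy", "shapely",
            "scikit-image", "tqdm", "imageio", "mpmath"]

with open("requirements.txt", "w") as fh:
    for name in packages:
        try:
            fh.write(f"{name}=={version(name)}\n")
        except PackageNotFoundError:
            fh.write(f"# {name}: not installed in this environment\n")
```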
|
open
|
2021-12-11T12:57:53Z
|
2021-12-13T11:26:24Z
|
https://github.com/rougier/scientific-visualization-book/issues/33
|
[] |
geo7
| 0
|
SYSTRAN/faster-whisper
|
deep-learning
| 869
|
Transcribe results being translated to different language
|
Hi all,
I just want to post an issue I encountered. As the title suggests, faster-whisper transcribed the audio file in the wrong language.
This is my code for this test:
```
from faster_whisper import WhisperModel
model_size = "medium"
file = "audio.mp3"
model = WhisperModel(model_size, device="cpu", compute_type="int8")
segments, info = model.transcribe(
file,
initial_prompt="Umm, hmm, Uhh, Ahh", #used to detect fillers
temperature=0,
vad_filter=True,
without_timestamps=True,
beam_size=1,
# chunk_length=3
)
print("Detected language '%s' with probability %f" % (info.language, info.language_probability))
print(info)
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
And the result was:
> [1.01s -> 8.40s] Tidak ada, biar saya tolong awak dengan masalah pembayaran. Boleh saya tahu masalah apa yang awak hadapi?
I used Google Translate on the result and saw that the Malay sentence is close to what was said in the English audio; I'm not sure why it was translated. I checked all_language_probs and saw that the highest probability was "ms". Given this, is there a way to prevent it? I know that setting a language helps, but the plan is to keep it flexible and be able to transcribe audio in differing languages.
[audio.zip](https://github.com/user-attachments/files/15551003/audio.zip)
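A rough mitigation sketch (my own assumption, not an official recipe): inspect `all_language_probs` after a first pass and re-run with the language pinned when English looks like a plausible runner-up. The 0.3 threshold below is arbitrary.
```python
from faster_whisper import WhisperModel

model = WhisperModel("medium", device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.mp3", beam_size=1, vad_filter=True)
probs = dict(info.all_language_probs or [])

# If the detector picked another language but English is a close contender,
# redo the pass with the language forced.
if info.language != "en" and probs.get("en", 0.0) > 0.3:
    segments, info = model.transcribe(
        "audio.mp3", language="en", beam_size=1, vad_filter=True
    )

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```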
|
closed
|
2024-06-04T12:08:45Z
|
2024-06-17T07:04:37Z
|
https://github.com/SYSTRAN/faster-whisper/issues/869
|
[] |
acn-reginald-casela
| 11
|
plotly/dash
|
dash
| 2,764
|
Dangerous link detected error after upgrading to Dash 2.15.0
|
```
dash 2.15.0
dash-bootstrap-components 1.5.0
dash-core-components 2.0.0
dash-extensions 1.0.12
dash-html-components 2.0.0
dash-iconify 0.1.2
dash-mantine-components 0.12.1
dash-table 5.0.0
```
After upgrading to Dash 2.15.0, I have apps that are now breaking with `Dangerous link detected` errors, which are emitted when I use `Iframe` components to display embedded data, e.g. PDF files. Here is a small example,
```
import base64
import requests
from dash import Dash, html
# Get a sample PDF file.
r = requests.get("https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf")
bts = r.content
# Encode PDF file as base64 string.
encoded_string = base64.b64encode(bts).decode("ascii")
src = f"data:application/pdf;base64,{encoded_string}"
# Make a small example app.
app = Dash()
app.layout = html.Iframe(id="embedded-pdf", src=src, width="100%", height="100%")
if __name__ == '__main__':
app.run_server()
```
I would expect the app would continue to work, displaying the PDF, like it did in previous versions.
I guess the issue is related to the fixing of XSS vulnerabilities as mentioned in #2743 . However, I am not sure why it should be considered a vulnerability to display an embedded PDF file.
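A possible workaround sketch (my assumption, not an official fix): serve the PDF from a route on the underlying Flask server and point the `Iframe` at a plain relative URL, so no `data:` URI is involved.
```python
import requests
from dash import Dash, html
from flask import Response

r = requests.get("https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf")
pdf_bytes = r.content

app = Dash()

@app.server.route("/embedded.pdf")
def serve_pdf():
    # Serve the raw bytes with the PDF mimetype instead of embedding them.
    return Response(pdf_bytes, mimetype="application/pdf")

app.layout = html.Iframe(id="embedded-pdf", src="/embedded.pdf", width="100%", height="100%")

if __name__ == "__main__":
    app.run_server()
```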
|
closed
|
2024-02-18T13:47:23Z
|
2024-04-22T15:43:09Z
|
https://github.com/plotly/dash/issues/2764
|
[
"bug",
"sev-1"
] |
emilhe
| 11
|
chatopera/Synonyms
|
nlp
| 25
|
How is the data in the words.nearby.json.gz dictionary generated? Many thanks.
|
# description
## current
## expected
# solution
# environment
* version:
The commit hash (`git rev-parse HEAD`)
|
closed
|
2018-01-16T00:40:04Z
|
2018-01-16T06:22:40Z
|
https://github.com/chatopera/Synonyms/issues/25
|
[
"duplicate"
] |
waterzxj
| 3
|
sanic-org/sanic
|
asyncio
| 2,464
|
`Async for` can be used to iterate over a websocket's incoming messages
|
**Is your feature request related to a problem? Please describe.**
When creating a websocket server I'd like to use `async for` to iterate over the incoming messages from a connection. This is a feature that the [websockets lib](https://websockets.readthedocs.io/en/stable/) uses. Currently, if you try to use `async for` you get the following error:
```console
TypeError: 'async for' requires an object with __aiter__ method, got WebsocketImplProtocol
```
**Describe the solution you'd like**
Ideally, I could run something like the following on a Sanic websocket route:
```python
@app.websocket("/")
async def feed(request, ws):
async for msg in ws:
print(f'received: {msg.data}')
await ws.send(msg.data)
```
**Additional context**
[This was originally discussed on the sanic-support channel on the discord server](https://discord.com/channels/812221182594121728/813454547585990678/978393931903545344)
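In the meantime, a manual receive loop works as an interim sketch (assuming `recv()` yields `None` once the peer disconnects):
```python
from sanic import Sanic

app = Sanic("EchoApp")

@app.websocket("/")
async def feed(request, ws):
    while True:
        msg = await ws.recv()  # assumed to return None when the connection closes
        if msg is None:
            break
        print(f"received: {msg}")
        await ws.send(msg)
```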
|
closed
|
2022-05-23T21:05:24Z
|
2022-09-20T21:20:33Z
|
https://github.com/sanic-org/sanic/issues/2464
|
[
"help wanted",
"intermediate",
"feature request"
] |
bradlangel
| 11
|
PaddlePaddle/PaddleHub
|
nlp
| 1,410
|
hub serving starts successfully, but requests return 503
|
I started my fine-tuned model with hub serving, but requests to it return a 503.
|
open
|
2021-05-12T03:27:55Z
|
2021-05-14T02:21:05Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/1410
|
[
"serving"
] |
dedex1994
| 3
|
TheKevJames/coveralls-python
|
pytest
| 4
|
Documentation could be clearer
|
The README says
> First, log in via Github and add your repo on Coveralls website.
>
> Second, install this package:
>
> $ pip install coveralls
>
> If you're using Travis CI, no further configuration is required.
Which raises a question: How can a package I pip install locally into my development laptop affect anything that happens between Travis CI and Coveralls.io?
Turns out some further configuration _is_ required after all. Specifically, I have to edit .travis.yml:
- I have to add `pip install coveralls` to the `install` section (or update my requirements.txt if I have one).
- I have to compute the coverage (using some variation of `coverage run testscript.py`),
- and then I have to push it to coveralls.io (by calling `coveralls` in the after_script section -- or should it be after_success? I don't know, I've seen both!).
It would be really helpful to have a sample .travis.yml in the README.
|
closed
|
2013-04-09T19:10:07Z
|
2013-04-10T16:57:21Z
|
https://github.com/TheKevJames/coveralls-python/issues/4
|
[] |
mgedmin
| 6
|
pallets-eco/flask-sqlalchemy
|
flask
| 1,145
|
flask-sqlAlchemy >=3.0.0 version: session.bind is None
|
<!--
This issue tracker is a tool to address bugs in Flask-SQLAlchemy itself.
Please use Pallets Discord or Stack Overflow for questions about your
own code.
Ensure your issue is with Flask-SQLAlchemy and not SQLAlchemy itself.
Replace this comment with a clear outline of what the bug is.
-->
<!--
Describe how to replicate the bug.
Include a minimal reproducible example that demonstrates the bug.
Include the full traceback if there was an exception.
-->
```
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
import pymysql
db = SQLAlchemy()
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = ''
pymysql.install_as_MySQLdb()
db.init_app(app)
class Task(db.Model):
""" task info table """
__tablename__ = 'test_client_task'
__table_args__ = {'extend_existing': True}
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(32), nullable=False)
with app.app_context():
print(db.session.bind)
# Task.__table__.drop(db.session.bind)
Task.__table__.create(db.session.bind)
```
<!--
Describe the expected behavior that should have happened but didn't.
-->
Expected: db.session.bind should be an Engine, but instead db.session.bind is None.
When a table is created this way, an exception is raised.
But I found it to be normal with version 2.5.1
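A minimal workaround sketch for 3.x, continuing the snippet above (assuming the engine is all that is needed here): use `db.engine`, which is available inside an application context.
```python
with app.app_context():
    print(db.engine)                  # engine for the default bind
    Task.__table__.create(db.engine)  # or simply: db.create_all()
```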
Environment:
- Python version: 3.9
- Flask-SQLAlchemy version: >= 3.0
- SQLAlchemy version: 1.4.45
|
closed
|
2022-12-13T07:23:11Z
|
2023-01-31T14:25:21Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/1145
|
[] |
wangtao2213405054
| 1
|
TencentARC/GFPGAN
|
pytorch
| 496
|
Add atile
|
open
|
2024-01-27T07:15:33Z
|
2024-01-27T07:15:33Z
|
https://github.com/TencentARC/GFPGAN/issues/496
|
[] |
dildarhossin
| 0
|
|
mwaskom/seaborn
|
data-visualization
| 3,292
|
Wrong legend color when using histplot multiple times
|
```
sns.histplot([1,2,3])
sns.histplot([4,5,6])
sns.histplot([7,8,9])
sns.histplot([10,11,12])
plt.legend(labels=["A", "B", "C", "D"])
```

Seaborn 0.12.2, matplotlib 3.6.2
This may be related to https://github.com/mwaskom/seaborn/issues/3115 but is not the same issue, since histplot is used multiple times
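A possible workaround sketch (assuming the goal is one correctly colored legend entry per dataset): put the data in long form and let a single histplot call manage hue and the legend.
```python
import pandas as pd
import seaborn as sns

df = pd.concat([
    pd.DataFrame({"value": [1, 2, 3], "label": "A"}),
    pd.DataFrame({"value": [4, 5, 6], "label": "B"}),
    pd.DataFrame({"value": [7, 8, 9], "label": "C"}),
    pd.DataFrame({"value": [10, 11, 12], "label": "D"}),
])
sns.histplot(data=df, x="value", hue="label")
```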
|
closed
|
2023-03-10T16:04:15Z
|
2023-03-10T19:10:43Z
|
https://github.com/mwaskom/seaborn/issues/3292
|
[] |
mesvam
| 1
|
huggingface/datasets
|
numpy
| 7,371
|
500 Server error with pushing a dataset
|
### Describe the bug
Suddenly, I started getting this error message saying it was an internal error.
`Error creating/pushing dataset: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/ll4ma-lab/grasp-dataset/commit/main (Request ID: Root=1-6787f0b7-66d5bd45413e481c4c2fb22d;670d04ff-65f5-4741-a353-2eacc47a3928)
Internal Error - We're working hard to fix this as soon as possible!
Traceback (most recent call last):
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/ll4ma-lab/grasp-dataset/commit/main
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/uufs/chpc.utah.edu/common/home/u1295595/grasp_dataset_converter/src/grasp_dataset_converter/main.py", line 142, in main
subset_train.push_to_hub(dataset_name, split='train')
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5624, in push_to_hub
commit_info = api.create_commit(
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 1518, in _inner
return fn(self, *args, **kwargs)
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 4087, in create_commit
hf_raise_for_status(commit_resp, endpoint_name="commit")
File "/uufs/chpc.utah.edu/common/home/hermans-group1/martin/software/pkg/miniforge3/envs/myenv2/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/ll4ma-lab/grasp-dataset/commit/main (Request ID: Root=1-6787f0b7-66d5bd45413e481c4c2fb22d;670d04ff-65f5-4741-a353-2eacc47a3928)
Internal Error - We're working hard to fix this as soon as possible!`
### Steps to reproduce the bug
I am pushing a Dataset in a loop via push_to_hub API
### Expected behavior
It worked fine until it stopped working suddenly.
Expected behavior: It should start working again
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-4.18.0-477.15.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.0
- `huggingface_hub` version: 0.27.1
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
|
open
|
2025-01-15T18:23:02Z
|
2025-01-15T20:06:05Z
|
https://github.com/huggingface/datasets/issues/7371
|
[] |
martinmatak
| 1
|
FlareSolverr/FlareSolverr
|
api
| 580
|
Cannot run FlareSolverr; it stops automatically when I run it.
|
I cannot run FlareSolverr.
### Environment
* **FlareSolverr version**: 2.2.10
* **Last working FlareSolverr version**:
* **Operating system**: qnap QTS 5.0.1.2194
* **Are you using Docker**: [yes/no] yes
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no] no
* **Are you using Captcha Solver:** [yes/no] no
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots

[Place any screenshots of the issue here if needed]
|
closed
|
2022-11-11T02:31:24Z
|
2022-11-12T04:24:03Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/580
|
[
"more information needed"
] |
dongxi8
| 7
|
vimalloc/flask-jwt-extended
|
flask
| 167
|
How to start server
|
Hey everyone,
I successfully installed the dependencies in requirements.txt, but I am having difficulty starting the server.
Any help will be greatly appreciated.
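For reference, a minimal runnable sketch (assuming a standard app.py at the project root; the secret key is a placeholder):
```python
from flask import Flask
from flask_jwt_extended import JWTManager

app = Flask(__name__)
app.config["JWT_SECRET_KEY"] = "change-me"  # placeholder
jwt = JWTManager(app)

@app.route("/")
def index():
    return "server is running"

if __name__ == "__main__":
    app.run(debug=True)  # start with: python app.py
```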
|
closed
|
2018-07-05T10:39:33Z
|
2018-07-21T02:57:32Z
|
https://github.com/vimalloc/flask-jwt-extended/issues/167
|
[] |
shafikhaan
| 1
|
koxudaxi/datamodel-code-generator
|
fastapi
| 1,841
|
linkml export support?
|
I discovered your project from https://github.com/gold-standard-phantoms/bids-pydantic/ which produces pydantic models from our BIDS standard metadata descriptor files such as https://github.com/bids-standard/bids-schema/blob/main/versions/1.9.0/schema/objects/metadata.yaml after a few [ad-hoc transformations](https://github.com/bids-standard/bids-schema/blob/main/versions/1.9.0/schema/objects/metadata.yaml) using your library. But there is now a relatively new kid in the sandbox, https://linkml.io/ , which provides a metadata description in YAML files expressive enough to then export into a [variety of underlying serializations like pydantic, jsonschema, etc](https://linkml.io/linkml/generators/index.html).
Thought I'd drop this here to gauge possible interest in adopting it for the "export" functionality.
|
open
|
2024-02-06T23:50:24Z
|
2024-02-06T23:50:24Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/1841
|
[] |
yarikoptic
| 0
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 233
|
API endpoint returns a 500 error
|
https://api.douyin.wtf/tiktok_video_data?video_id=7257925577091386642
This endpoint returns a 500 Internal Server Error.
|
closed
|
2023-08-04T03:17:01Z
|
2023-08-04T09:28:09Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/233
|
[
"BUG"
] |
419606974
| 2
|
jupyterhub/repo2docker
|
jupyter
| 901
|
Spurious numpy version gets installed
|
### Bug description
An [archived repo of mine](https://github.com/mdeff/ntds_2016) requires python 3.5 (specified in `runtime.txt`) and numpy 1.11.1 (specified in `requirements.txt`). While it used to work, the repo now fails to build on <https://mybinder.org> as it somehow tries to use numpy 1.19.0rc1, which is not compatible with python 3.5. (Why would a numpy rc ever be installed?)
#### Expected behaviour
Only numpy 1.11.1 gets installed.
#### Actual behaviour
Build fails with `SyntaxError` during the installation of `statsmodels==0.6.1` because there's an f-string in numpy 1.19's `setup.py` (introduced in python 3.6).
### How to reproduce
Click <https://mybinder.org/v2/gh/mdeff/ntds_2016/with_outputs?urlpath=tree> and observe the build error. (Why can't I copy text from the build logs?)
|
closed
|
2020-05-28T15:31:19Z
|
2020-06-15T16:50:08Z
|
https://github.com/jupyterhub/repo2docker/issues/901
|
[] |
mdeff
| 5
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,997
|
Support for once to listen for an event only 1 time (Like official socket.io server has)
|
**Is your feature request related to a problem? Please describe.**
For example to request user input from a socket and to only handle the output once.
**Describe the solution you'd like**
Sending an event to the client: requestInput -> {"title": "Enter name of file"},
then in the client ask for input and send the response back to the server.
On the server, write the data to, for example, Redis.
**Additional context**
[Add any other context or screenshots about the feature request here.
](https://socket.io/docs/v4/listening-to-events/#socketonceeventname-listener)
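Until something like `once` exists, a rough server-side emulation sketch (the event name and storage are placeholders, not an official API): register a normal handler and ignore everything after the first response per client.
```python
from flask import Flask, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)
_answered = set()

@socketio.on("inputResponse")      # hypothetical event name
def handle_input_response(data):
    sid = request.sid              # session id of the emitting client
    if sid in _answered:
        return                     # only the first response is handled
    _answered.add(sid)
    # ... persist `data`, e.g. write it to Redis

if __name__ == "__main__":
    socketio.run(app)
```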
|
closed
|
2023-06-30T10:31:07Z
|
2024-01-06T19:10:35Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1997
|
[
"enhancement"
] |
kingdevnl
| 1
|
chezou/tabula-py
|
pandas
| 255
|
First line is ignored while Reading
|
<!--- Provide a general summary of your changes in the Title above -->
The issue arises when the PDF has incomplete borders in the first row of the table.
# Summary of your issue
<!-- Write the summary of your issue here -->
# Check list before submit
<!--- Write and check the following questionaries. -->
- [ ] Did you read [FAQ](https://tabula-py.readthedocs.io/en/latest/faq.html)?
Yes
- [ ] (Optional, but really helpful) Your PDF URL: ?
Cant't Upload
- [ ] Paste the output of `import tabula; tabula.environment_info()` on Python REPL: ?
Python version:
3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
Java version:
openjdk version "11.0.1" 2018-10-16 LTS
OpenJDK Runtime Environment Zulu11.2+3 (build 11.0.1+13-LTS)
OpenJDK 64-Bit Server VM Zulu11.2+3 (build 11.0.1+13-LTS, mixed mode)
tabula-py version: 1.4.1
platform: Windows-10-10.0.18362-SP0
uname:
uname_result(system='Windows', node='APCL20L6F1F2CE4', release='10', version='10.0.18362', machine='AMD64', processor='Intel64 Family 6 Model 142 Stepping 10, GenuineIntel')
linux_distribution: ('MSYS_NT-10.0-18362', '3.0.7', '')
mac_ver: ('', ('', '', ''), '')
If not possible to execute `tabula.environment_info()`, please answer following questions manually.
- [ ] Paste the output of `python --version` command on your terminal: ?
- [ ] Paste the output of `java -version` command on your terminal: ?
- [ ] Does `java -h` command work well?; Ensure your java command is included in `PATH`
- [ ] Write your OS and it's version: ?
# What did you do when you faced the problem?

<!--- Provide your information to reproduce the issue. -->
## Code:
import os
import tabula
os.chdir('C://python')
table = tabula.read_pdf("Core_Lib.pdf", multiple_tables=True, pages='35', lattice=True, stream=True)
```
paste your core code which minimum reproducible for the issue
```
## Expected behavior:
<!--- Write your expected results/outputs -->
```
write your expected output
```
## Actual behavior:
<!--- Put the actual results/outputs -->

```
paste your output
```
## Related Issues:
|
closed
|
2020-08-04T07:27:18Z
|
2020-08-04T12:30:54Z
|
https://github.com/chezou/tabula-py/issues/255
|
[
"PDF required"
] |
Jagadeeshkb
| 1
|
JaidedAI/EasyOCR
|
machine-learning
| 1,215
|
English multi-column text is returned out of order
|
Newspapers and other multi-column text blocks appear to be scanned line by line, there is no column detection. The resulting output is in the order of first line from first column, first line from second column, first line from third column, second line from first column, second line from second column, etc.
I am not sure the best approach to fixing this. As a workaround I will just split the input images by column.
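A sketch of that column-splitting workaround (assumes N equally wide columns and a placeholder filename; real layouts would need proper column detection):
```python
import cv2
import easyocr

reader = easyocr.Reader(["en"])
image = cv2.imread("newspaper.png")  # placeholder path
n_columns = 3
width = image.shape[1] // n_columns

texts = []
for i in range(n_columns):
    column = image[:, i * width:(i + 1) * width]
    # readtext accepts numpy arrays directly; detail=0 returns plain strings
    lines = reader.readtext(column, detail=0, paragraph=True)
    texts.append("\n".join(lines))

print("\n\n".join(texts))
```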
|
open
|
2024-02-17T00:38:02Z
|
2024-02-17T00:38:02Z
|
https://github.com/JaidedAI/EasyOCR/issues/1215
|
[] |
mlc-delgado
| 0
|
jmcnamara/XlsxWriter
|
pandas
| 138
|
Change to method signature of workbook.add_format()
|
I am wondering if there is any support for changing the signature for the workbook.add_format() method to the following:
``` python
def add_format(self, properties = {}, **props):
if properties and props:
raise Exception("Use one or the other (make nicer error message)")
....
```
This would allow a (in my opinion) more pleasing and less cluttered method of defining formats. Instead of
``` python
workbook.add_format({ "font_name" : "Tahoma", "font_size" : 8 })
```
we could have
``` python
workbook.add_format(font_name = "Tahoma", font_size = 8)
```
If there is any support for this I could generate a pull request for doing this but don't want to waste anyone's time if there is a reason for not doing this that has been covered in the past. Please advise. Thanks!
|
closed
|
2014-06-26T16:41:37Z
|
2016-09-30T14:16:13Z
|
https://github.com/jmcnamara/XlsxWriter/issues/138
|
[
"feature request"
] |
anthony-tuininga
| 6
|
jpadilla/django-rest-framework-jwt
|
django
| 354
|
Use JWT token auth for https api failed (return "Authentication credentials were not provided")
|
When I deploy this app with Apache using plain HTTP, it works fine. But after I set "SSLEngine on" in Apache, I can only access these APIs in a browser. Requests via Postman, python-requests, or curl all fail with the same error.

|
closed
|
2017-07-26T13:46:35Z
|
2019-05-17T17:49:19Z
|
https://github.com/jpadilla/django-rest-framework-jwt/issues/354
|
[] |
lizeyan
| 2
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,023
|
Imperva detection
|
Hi. I have tried everything (stealth-selenium, patching window.cdc_*, a fake user-agent), but Imperva detects me every time.
https://fingerprint.com/products/bot-detection and other bot-detection checks come back clean. Do you have any idea about this? It's not my IP, because I have tried different IPs. Sorry for my bad English, by the way.
|
closed
|
2023-02-02T12:44:36Z
|
2023-02-09T01:24:38Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1023
|
[] |
melleryb
| 11
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 322
|
Problem:Run demo_cli.py, Nothing happened!
|
I already run " pip3 install -r requirements.txt" ,it's done.
When I run "demo_cli.py", nothing happened. I repeated it several times, but just nothing. As follows:
(base) C:\Users\Administrator\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master>python demo_cli.py
(base) C:\Users\Administrator\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master>
Does anyone know why? Thanks.
|
closed
|
2020-04-15T12:22:32Z
|
2020-04-28T03:33:08Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/322
|
[] |
herorichard
| 3
|
LAION-AI/Open-Assistant
|
machine-learning
| 3,537
|
Lagging
|
There is a bug or lag in the chat: it always stalls when I want to start a chat, even though my internet connection is good. Please fix the server.
|
closed
|
2023-07-01T09:55:38Z
|
2023-07-04T19:51:32Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3537
|
[
"bug"
] |
M4X4Lmina
| 1
|
aio-libs/aiopg
|
sqlalchemy
| 385
|
Any way to stream results asynchronously?
|
Since psycopg2 doesn't support named cursors in async mode, is there any way to stream results?
Is it possible to use aiopg's interface synchronously to stream results? If so, how? Do I need to use raw psycopg2 directly?
|
closed
|
2017-09-11T16:42:23Z
|
2021-03-05T21:18:20Z
|
https://github.com/aio-libs/aiopg/issues/385
|
[] |
Daenyth
| 3
|
erdewit/ib_insync
|
asyncio
| 46
|
doing loops to get historical data and execute order at specific time
|
Hello,
Wondering if there is an easy way to get historical data and execute order at specific time. For example, in the first second of every minute (e.g. 15:00:01, 15:01:01, 15:02:01, 15:03:01...) i want to get some data and execute orders.
Thanks.
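A rough sketch of one way to do this (not taken from the library docs; the port, contract, and decision rule are placeholders): sleep until second :01 of each minute, pull a small history snapshot, and place an order.
```python
import datetime as dt
from ib_insync import IB, Stock, MarketOrder

ib = IB()
ib.connect("127.0.0.1", 7497, clientId=1)   # placeholder TWS/Gateway port
contract = Stock("AAPL", "SMART", "USD")    # placeholder instrument

def seconds_until_next_minute_plus_one() -> float:
    now = dt.datetime.now()
    return 60 - now.second - now.microsecond / 1e6 + 1

while True:
    ib.sleep(seconds_until_next_minute_plus_one())   # ib.sleep keeps the event loop running
    bars = ib.reqHistoricalData(
        contract, endDateTime="", durationStr="60 S",
        barSizeSetting="1 min", whatToShow="TRADES", useRTH=False)
    if bars and bars[-1].close > 0:                  # placeholder decision rule
        ib.placeOrder(contract, MarketOrder("BUY", 10))
```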
|
closed
|
2018-02-28T04:59:36Z
|
2018-02-28T10:38:00Z
|
https://github.com/erdewit/ib_insync/issues/46
|
[] |
dccw
| 1
|
sherlock-project/sherlock
|
python
| 1,336
|
Define JSON Schemas to put at the top of each JSON file
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure complete everything in the checklist.
-->
- [x] I'm reporting a feature request
- [x] I've checked for similar feature requests including closed ones
## Description
<!--
Provide a detailed description of the feature you would like Sherlock to have
-->
We maintain quite a few _huge_ JSON files in this repository.
It's not impossible that human-error does it's thing and we have typos in fields, or property values that don't make sense, or accidentally add redundant properties etc.
I think we should make a JSON Schema that validates the contents of the JSON files, and we can define them at `$schema`.
This will:
1. Add code completion and validation on IDEs and code editors when contributors modify the file, vastly improving the contribution experience.
2. Allow us to strictly define and document all rules / properties that can be set in the JSON files.
3. Maybe later we can add a validation step in CI to ensure the schema matches. (Maybe this could be in code as well for when users use a custom JSON file via the `--json` argument.)
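As a sketch of the validation step in point 3 (file paths are assumptions), the `jsonschema` package can check the data against such a schema:
```python
import json
from jsonschema import validate

# Paths are placeholders for wherever the schema and site data would live.
with open("sherlock/resources/data.schema.json") as f:
    schema = json.load(f)
with open("sherlock/resources/data.json") as f:
    data = json.load(f)

validate(instance=data, schema=schema)  # raises ValidationError on mismatch
print("data.json matches the schema")
```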
|
closed
|
2022-05-07T07:44:20Z
|
2024-05-07T19:26:00Z
|
https://github.com/sherlock-project/sherlock/issues/1336
|
[
"enhancement"
] |
SethFalco
| 0
|
plotly/dash
|
data-science
| 2,615
|
[BUG] DataTable Failed to execute 'querySelector' on 'Element' - Line break in column id
|
```
Error: Failed to execute 'querySelector' on 'Element': 'tr:nth-of-type(1) th[data-dash-column="ccc
ddd"]:not(.phantom-cell)' is not a valid selector.
```
This happens when the column id has a line break (e.g. `{'name':'aaabbb', 'id':'aaa\nbbb'}`) but not when the line break is in the name (e.g. `{'name':'aaa\nbbb', 'id':'aaabbb'}`).
**Traceback:**
```
Error: Failed to execute 'querySelector' on 'Element': 'tr:nth-of-type(1) th[data-dash-column="ccc
ddd"]:not(.phantom-cell)' is not a valid selector.
at u.value (http://127.0.0.1:8050/_dash-component-suites/dash/dash_table/async-table.js:2:334450)
at u.value (http://127.0.0.1:8050/_dash-component-suites/dash/dash_table/async-table.js:2:327840)
at commitLifeCycles (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_10_2m1687358438.14.0.js:19970:24)
at commitLayoutEffects (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_10_2m1687358438.14.0.js:22938:9)
at HTMLUnknownElement.callCallback (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_10_2m1687358438.14.0.js:182:16)
at Object.invokeGuardedCallbackDev (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_10_2m1687358438.14.0.js:231:18)
at invokeGuardedCallback (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_10_2m1687358438.14.0.js:286:33)
at commitRootImpl (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_10_2m1687358438.14.0.js:22676:11)
at unstable_runWithPriority (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react@16.v2_10_2m1687358438.14.0.js:2685:14)
at runWithPriority$1 (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_10_2m1687358438.14.0.js:11174:12)
```
**MRE:**
```
from dash import Dash, dash_table, html
import pandas as pd
df = pd.DataFrame({
"aaa\nbbb":[4,5,6],
"ccc\nddd":[1,2,3]
})
app = Dash(__name__)
app.layout = html.Div([
dash_table.DataTable(
id='table',
columns=[
{"name": "aaabbb", "id": "aaa\nbbb"} ,
{"name": "cccddd", "id": "ccc\nddd"}
],
data=df.to_dict('records'),
)
])
if __name__ == '__main__':
app.run(debug=True)
```
**Env:**
Python 3.9.0
dash==2.10.2
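A possible workaround sketch until this is fixed (my assumption): strip line breaks from the column ids (and the matching DataFrame columns), since the id ends up inside a CSS selector.
```python
from dash import Dash, dash_table, html
import pandas as pd

def clean(name: str) -> str:
    return name.replace("\n", " ")   # keep ids free of line breaks

df = pd.DataFrame({"aaa\nbbb": [4, 5, 6], "ccc\nddd": [1, 2, 3]}).rename(columns=clean)

app = Dash(__name__)
app.layout = html.Div([
    dash_table.DataTable(
        id="table",
        columns=[{"name": c, "id": c} for c in df.columns],
        data=df.to_dict("records"),
    )
])

if __name__ == "__main__":
    app.run(debug=True)
```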
|
closed
|
2023-08-09T11:13:05Z
|
2023-08-14T17:03:32Z
|
https://github.com/plotly/dash/issues/2615
|
[
"bug",
"dash-data-table"
] |
celia-lm
| 0
|
allenai/allennlp
|
nlp
| 5,537
|
Using evaluate command for multi-task model
|
<!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [x] I have verified that the issue exists against the `main` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/main/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [x] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/main/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/main) to find out if the bug was already fixed in the main branch.
- [x] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [x] I have included in the "Related issues or possible duplicates" section beloew all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [x] I have included in the "Environment" section below the output of `pip freeze`.
- [x] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
The evaluate command cannot support the multi-task setup: input_file only takes a single file name as input,
but the multi-task reader needs a dict of data paths.
<!-- Please provide a clear and concise description of what the bug is here. -->
<details>
<summary><b>Python traceback:</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
(allennlp2_8) D:\WorkSpace\GeoQA-with-UniT -2_8>allennlp evaluate save/test test_data_path --include-package NGS_Aux --cuda-device 0
2022-01-07 09:38:22,837 - INFO - allennlp.models.archival - loading archive file save/test
2022-01-07 09:38:23,469 - INFO - allennlp.data.vocabulary - Loading token dictionary from save/test\vocabulary.
##### Checkpoint Loaded! #####
2022-01-07 09:38:28,271 - INFO - allennlp.modules.token_embedders.embedding - Loading a model trained before embedding extension was implemented; pass an explicit vocab namespace if you wan
t to extend the vocabulary.
2022-01-07 09:38:29,051 - INFO - allennlp.common.checks - Pytorch version: 1.10.0+cu113
2022-01-07 09:38:29,053 - INFO - allennlp.commands.evaluate - Reading evaluation data from test_data_path
Traceback (most recent call last):
File "D:\Anaconda\envs\allennlp2_8\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "D:\Anaconda\envs\allennlp2_8\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\Anaconda\envs\allennlp2_8\Scripts\allennlp.exe\__main__.py", line 7, in <module>
File "D:\Anaconda\envs\allennlp2_8\lib\site-packages\allennlp\__main__.py", line 46, in run
main(prog="allennlp")
File "D:\Anaconda\envs\allennlp2_8\lib\site-packages\allennlp\commands\__init__.py", line 123, in main
args.func(args)
File "D:\Anaconda\envs\allennlp2_8\lib\site-packages\allennlp\commands\evaluate.py", line 189, in evaluate_from_args
params=data_loader_params, reader=dataset_reader, data_path=evaluation_data_path
File "D:\Anaconda\envs\allennlp2_8\lib\site-packages\allennlp\common\from_params.py", line 656, in from_params
**extras,
File "D:\Anaconda\envs\allennlp2_8\lib\site-packages\allennlp\common\from_params.py", line 686, in from_params
return constructor_to_call(**kwargs) # type: ignore
File "D:\Anaconda\envs\allennlp2_8\lib\site-packages\allennlp\data\data_loaders\multitask_data_loader.py", line 143, in __init__
if self.readers.keys() != self.data_paths.keys():
AttributeError: 'str' object has no attribute 'keys'
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
Windows:
aiohttp 3.8.0
aiosignal 1.2.0
allennlp 2.8.0
argcomplete 1.12.3
argon2-cffi 21.1.0
async-timeout 4.0.0
asynctest 0.13.0
atomicwrites 1.4.0
attrs 21.2.0
backcall 0.2.0
backports.csv 1.0.7
base58 2.1.1
beautifulsoup4 4.10.0
bleach 4.1.0
blis 0.7.5
boto3 1.19.12
botocore 1.22.12
cached-path 0.3.2
cached-property 1.5.2
cachetools 4.2.4
catalogue 1.0.0
certifi 2021.10.8
cffi 1.15.0
chardet 4.0.0
charset-normalizer 2.0.7
checklist 0.0.11
cheroot 8.5.2
CherryPy 18.6.1
click 8.0.3
colorama 0.4.4
configparser 5.1.0
cryptography 35.0.0
cymem 2.0.6
Cython 0.29.23
datasets 1.15.1
debugpy 1.5.1
decorator 5.1.0
defusedxml 0.7.1
dill 0.3.4
docker-pycreds 0.4.0
en-core-web-sm 2.3.0
entrypoints 0.3
fairscale 0.4.0
feedparser 6.0.8
filelock 3.3.2
frozenlist 1.2.0
fsspec 2021.11.0
future 0.18.2
gensim 4.1.2
gitdb 4.0.9
GitPython 3.1.24
google-api-core 2.2.2
google-auth 2.3.3
google-cloud-core 2.1.0
google-cloud-storage 1.42.3
google-crc32c 1.3.0
google-resumable-media 2.1.0
googleapis-common-protos 1.53.0
h5py 3.5.0
huggingface-hub 0.1.1
idna 3.3
importlib-metadata 4.8.1
importlib-resources 5.4.0
iniconfig 1.1.1
ipykernel 6.5.0
ipython 7.29.0
ipython-genutils 0.2.0
ipywidgets 7.6.5
iso-639 0.4.5
jaraco.classes 3.2.1
jaraco.collections 3.4.0
jaraco.functools 3.4.0
jaraco.text 3.6.0
jedi 0.18.0
jieba 0.42.1
Jinja2 3.0.2
jmespath 0.10.0
joblib 1.1.0
jsonschema 4.2.1
jupyter 1.0.0
jupyter-client 7.0.6
jupyter-console 6.4.0
jupyter-core 4.9.1
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.2
lmdb 1.2.1
lxml 4.6.4
MarkupSafe 2.0.1
matplotlib-inline 0.1.3
mistune 0.8.4
more-itertools 8.10.0
multidict 5.2.0
multiprocess 0.70.12.2
munch 2.5.0
murmurhash 1.0.6
nbclient 0.5.4
nbconvert 6.2.0
nbformat 5.1.3
nest-asyncio 1.5.1
nltk 3.6.5
notebook 6.4.5
numpy 1.21.4
opencv-python 4.5.4.58
overrides 3.1.0
packaging 21.2
pandas 1.3.4
pandocfilters 1.5.0
parso 0.8.2
pathtools 0.1.2
pathy 0.6.1
patternfork-nosql 3.6
pdfminer.six 20211012
pickleshare 0.7.5
Pillow 8.4.0
pip 21.2.4
plac 1.1.3
pluggy 1.0.0
portend 3.0.0
preshed 3.0.6
prometheus-client 0.12.0
promise 2.3
prompt-toolkit 3.0.22
protobuf 3.19.1
psutil 5.8.0
py 1.11.0
pyarrow 6.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.21
pydantic 1.8.2
Pygments 2.10.0
pyparsing 2.4.7
pyrsistent 0.18.0
pytest 6.2.5
python-dateutil 2.8.2
python-docx 0.8.11
pytz 2021.3
pywin32 302
pywinpty 1.1.5
PyYAML 6.0
pyzmq 22.3.0
qtconsole 5.1.1
QtPy 1.11.2
regex 2021.11.2
requests 2.26.0
rsa 4.7.2
s3transfer 0.5.0
sacremoses 0.0.46
scikit-learn 1.0.1
scipy 1.7.2
Send2Trash 1.8.0
sentencepiece 0.1.96
sentry-sdk 1.4.3
setuptools 58.0.4
sgmllib3k 1.0.0
shortuuid 1.0.1
six 1.16.0
smart-open 5.2.1
smmap 5.0.0
soupsieve 2.3
spacy 2.3.7
spacy-legacy 3.0.8
sqlitedict 1.7.0
srsly 1.0.5
subprocess32 3.5.4
tempora 4.1.2
tensorboardX 2.4
termcolor 1.1.0
terminado 0.12.1
testpath 0.5.0
thinc 7.4.5
threadpoolctl 3.0.0
tokenizers 0.10.3
toml 0.10.2
torch 1.10.0+cu113
torchaudio 0.10.0+cu113
torchvision 0.11.1+cu113
tornado 6.1
tqdm 4.62.3
traitlets 5.1.1
transformers 4.12.3
typer 0.4.0
typing-extensions 3.10.0.2
urllib3 1.26.7
wandb 0.12.6
wasabi 0.8.2
wcwidth 0.2.5
webencodings 0.5.1
wheel 0.37.0
widgetsnbextension 3.5.2
wincertstore 0.2
xxhash 2.0.2
yarl 1.7.2
yaspin 2.1.0
zc.lockfile 2.0
zipp 3.6.0
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version:
3.7
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
```
</p>
</details>
## Steps to reproduce
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```
```
</p>
</details>
|
closed
|
2022-01-07T02:16:46Z
|
2022-02-28T21:42:05Z
|
https://github.com/allenai/allennlp/issues/5537
|
[
"bug",
"stale"
] |
ICanFlyGFC
| 10
|
dask/dask
|
numpy
| 11,307
|
Out of memory
|
**Describe the issue**:
It looks like `dask-expr` optimizes the query wrongly. Adding the extra `persist()` fixes the out-of-memory error:
```text
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
Cell In[5], line 100
87 display(
88 train_df,
89 test_df,
90 sku_groups_df,
91 )
93 display(
94 train_df.head(),
95 test_df.head(),
96 sku_groups_df.head(),
97 )
99 display(
--> 100 *dask.compute(
101 sku_groups_df['group'].value_counts(),
102 train_df['group'].value_counts(),
103 test_df['group'].value_counts(),
104 )
105 )
107 print(train_df.dtypes)
108 print(test_df.dtypes)
File ~/src/.venv/lib/python3.10/site-packages/dask/base.py:661, in compute(traverse, optimize_graph, scheduler, get, *args, **kwargs)
658 postcomputes.append(x.__dask_postcompute__())
660 with shorten_traceback():
--> 661 results = schedule(dsk, keys, **kwargs)
663 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
File ~/src/.venv/lib/python3.10/site-packages/distributed/client.py:2234, in Client._gather(self, futures, errors, direct, local_worker)
2232 exc = CancelledError(key)
2233 else:
-> 2234 raise exception.with_traceback(traceback)
2235 raise exc
2236 if errors == "skip":
MemoryError: Task ('repartitionsize-a141886386f8cc3cbacc3d585efdbc07', 470) has 22.35 GiB worth of input dependencies, but worker tls://10.164.0.41:40427 has memory_limit set to 15.64 GiB.
```
Working example:
```python
import dask
import dask.dataframe as dd
groups_df = (
dd
.read_parquet(
f'gs://groups.parquet/*',
columns=['ItemID', 'group'],
)
.astype({'group': 'category'})
.repartition(partition_size='100MB')
.persist()
)
def read_dataset(path: str) -> dd.DataFrame:
df = (
dd.read_parquet(
path,
# We have to request existing cols before renaming
columns=['ItemID', 'A_n/a', 'AT_n/a'],
)
.rename(columns={'A_n/a': 'A_na', 'AT_n/a': 'AT_na' })
# This fixes weired issue with dask-expr. I faced with OOM
.persist()
)
return (
df
.merge(
groups_df,
on='ID',
how='left'
)
.query('group in @groups_to_include',
local_dict={'groups_to_include': ['group1', 'group2', 'group3', 'group4', 'group4']})
.repartition(partition_size='100MB')
)
train_df = read_dataset(f'gs://train.parquet/*')
test_df = read_dataset(f'gs://test.parquet/*')
train_df.pprint()
train_df, test_df = dask.persist(train_df, test_df)
display(
train_df,
test_df,
groups_df,
)
display(
train_df.head(),
test_df.head(),
groups_df.head(),
)
display(
*dask.compute(
groups_df['sku_group'].value_counts(),
train_df['sku_group'].value_counts(),
test_df['sku_group'].value_counts(),
)
)
```
`pprint`:
```text
Repartition: partition_size='100MB'None
Query: _expr='group in @groups_to_include' expr_kwargs={'local_dict': {'groups_to_include': ['group1', 'group2', 'group3', 'group4', 'group4']}}
Merge: how='left' left_on='ItemID' right_on='ItemID'
FromGraph: layer={('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 0): <Future: pending, key: ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 0)>, ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 1): <Future: pending, key: ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 1)>, ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 2): <Future: pending, key: ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 2)>, ..., ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 941): <Future: finished, type: pandas.core.frame.DataFrame, key: ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 941)>, ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 942): <Future: finished, type: pandas.core.frame.DataFrame, key: ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 942)>} _meta='<pandas>' divisions=(None, None, ..., None, None) keys=[('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 0), ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 1), ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 2), ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 3), ..., ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 934), ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 935), ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 936), ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 937), ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 938), ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 939), ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 940), ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 941), ('read_parquet-operation-fe85241c2cbf3f4e3c47c8ce4d1bb145', 942)] name_prefix='read_parquet-operation'
FromGraph: layer={('repartitionsize-a7f9918679644d0763b0995e4cc22887', 0): <Future: finished, type: pandas.core.frame.DataFrame, key: ('repartitionsize-a7f9918679644d0763b0995e4cc22887', 0)>} _meta='<pandas>' divisions=(None, None) keys=[('repartitionsize-a7f9918679644d0763b0995e4cc22887', 0)] name_prefix='repartitionsize'
```
An example that leads to OOM:
```python
# ...
.rename(columns={'A_n/a': 'A_na', 'AT_n/a': 'AT_na' })
# Commeting out this line leads to OOM
#.persist()
#...
```
`pprint`:
```text
Repartition: partition_size='100MB'None
Query: _expr='sku_group in @sku_groups_to_include' expr_kwargs={'local_dict': {'sku_groups_to_include': ['no_sales', 'very_intermittent', 'intermittent', 'frequent', 'very_frequent']}}
Merge: how='left' left_on='InventoryID' right_on='InventoryID'
RenameFrame: columns={'A_n/a': 'A_na', 'AT_n/a': 'AT_na', ... }
ReadParquetFSSpec: path='gs://ecentria-ai-team-352717-demand-forecasting/ai-459/data/ecentria/features/train_features.parquet/*' columns=['CN', 'lag1', 'A_n/a', 'AT_n/a', ...] kwargs={'dtype_backend': None}
FromGraph: layer={('repartitionsize-a7f9918679644d0763b0995e4cc22887', 0): <Future: finished, type: pandas.core.frame.DataFrame, key: ('repartitionsize-a7f9918679644d0763b0995e4cc22887', 0)>} _meta='<pandas>' divisions=(None, None) keys=[('repartitionsize-a7f9918679644d0763b0995e4cc22887', 0)] name_prefix='repartitionsize'
```
**Minimal Complete Verifiable Example**:
TBD
**Anything else we need to know?**:
**Environment**:
- Dask version:
```
dask = "2024.5.2"
dask-expr = "1.1.2"
```
- Python version: 3.10
- Operating System: WSL
- Install method (conda, pip, source): poetry
|
open
|
2024-08-13T13:34:44Z
|
2024-08-19T10:21:22Z
|
https://github.com/dask/dask/issues/11307
|
[
"dataframe",
"dask-expr"
] |
dbalabka
| 12
|
microsoft/nni
|
pytorch
| 5,609
|
Tensorboard logging
|
**Describe the issue**:
Hello, I'm using NNI v3.0 and trying to log with TensorBoard. In my code I create the logging directory as `log_dir = os.path.join(os.environ["NNI_OUTPUT_DIR"], "tensorboard")`, but when I try to open TensorBoard in the web GUI, no logs appear. Looking at the output directory, I noticed that NNI_OUTPUT_DIR points to
> nni-experiments/dh1ou4w7/environments/local-env/trials/dqOLE/tensorboard
but tensorboard is trying to read
> nni-experiments/dh1ou4w7/trials/QlXHm/output/tensorboard
How can I get the correct one, and what is the difference between the two?
|
open
|
2023-06-13T15:25:48Z
|
2024-01-16T08:16:41Z
|
https://github.com/microsoft/nni/issues/5609
|
[] |
ferqui
| 2
|
HumanSignal/labelImg
|
deep-learning
| 109
|
a problem for V1.3.3 and V1.3.2 when using Windows
|
When trying to use v1.3.3 or v1.3.2 under Windows, it always exits immediately without any error message. What should I do?
|
closed
|
2017-07-06T07:40:03Z
|
2017-07-25T01:42:19Z
|
https://github.com/HumanSignal/labelImg/issues/109
|
[] |
wzx918
| 1
|
datadvance/DjangoChannelsGraphqlWs
|
graphql
| 26
|
Support for Django-GraphQL-Social-Auth
|
There are two awesome projects called:
1. https://github.com/flavors/django-graphql-social-auth
2. https://github.com/flavors/django-graphql-jwt
I am trying to use both the django-graphql-social-auth and the django-channels-graphql-ws packages in conjunction with each other. Ultimately, I would like to have a django channels websocket graphql api backend and a JS front-end using social auth.
When using dgql-social-auth with this library, I run into an issue with the request going through the python social auth library here: https://github.com/python-social-auth/social-app-django/blob/c00d23c2b45c3317bd35b15ad1b959338689cef8/social_django/strategy.py#L51
```python
def request_data(self, merge=True):
if not self.request:
return {}
if merge:
data = self.request.GET.copy()
data.update(self.request.POST)
elif self.request.method == 'POST':
data = self.request.POST
else:
data = self.request.GET
return data
```
I see the following error:
> An error occurred while resolving field Mutation.socialAuth
> Traceback (most recent call last):
> File "channels_graphql_ws\scope_as_context.py", line 42, in __getattr__
> return self._scope[name]
> KeyError: 'GET'
I believe this is because of how the websocket request object is formed versus a traditional request object that python social auth would expect.
My code includes adding this mutation to the example in this lib
```python
class Mutation(graphene.ObjectType):
"""Root GraphQL mutation."""
send_chat_message = SendChatMessage.Field()
login = Login.Field()
social_auth = graphql_social_auth.SocialAuthJWT.Field()
```
Using this gql
```graphql
mutation SocialAuth($provider: String!, $accessToken: String!) {
socialAuth(provider: $provider, accessToken: $accessToken) {
social {
uid
}
token
}
}
```
Is there any way to get these two to work together well?
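One untested idea (all of it is an assumption, not a documented API): since the context proxies attribute access to the Channels scope, pre-populating `GET`/`POST`/`method` in the scope at connection time might satisfy social-core's `request_data()`.
```python
import channels_graphql_ws
import graphene
from django.http import QueryDict

class Query(graphene.ObjectType):
    ok = graphene.Boolean()  # placeholder query so the schema is valid

schema = graphene.Schema(query=Query, mutation=Mutation)  # Mutation from the snippet above

class MyGraphqlWsConsumer(channels_graphql_ws.GraphqlWsConsumer):
    schema = schema

    async def on_connect(self, payload):
        # social-core reads request.GET / request.POST / request.method
        self.scope["GET"] = QueryDict()
        self.scope["POST"] = QueryDict()
        self.scope["method"] = "POST"
```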
|
closed
|
2019-10-02T03:20:56Z
|
2020-12-05T23:52:42Z
|
https://github.com/datadvance/DjangoChannelsGraphqlWs/issues/26
|
[
"question"
] |
kingtimm
| 3
|
developmentseed/lonboard
|
jupyter
| 134
|
Sync the clicked index back to Python
|
It would be great, besides a tooltip to display on the JS side, to sync the index of the object that was clicked. Then the user can do `gdf.iloc[map_.clicked_index]` to retrieve the specific row
Note that this can probably be an array of indices?
|
closed
|
2023-10-18T01:23:48Z
|
2023-11-28T18:08:18Z
|
https://github.com/developmentseed/lonboard/issues/134
|
[] |
kylebarron
| 0
|
gradio-app/gradio
|
deep-learning
| 10,396
|
Using S3 presigned URLs with gr.Model3D and gr.Video fails
|
### Describe the bug
**_Bug Report_**:_Presigned URL Error with Gradio Components_
**_Encountering an `OSError` when using presigned URLs with specific Gradio components, such as `gr.Video` or `gr.Model3D`. The error details are as follows_**
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
1. Generate a presigned URL for a file stored in S3.
2. Use the presigned URL as an input to a Gradio component, such as `gr.Video` or `gr.Model3D`.
3. Observe the error during file handling.
### Screenshot
_No response_
### Logs
```shell
OSError: [Errno 22] Invalid argument: 'C:\\Users\\Morningstar\\AppData\\Local\\Temp\\gradio\\<hashed_filename>\\.mp4?response-content-disposition=inline&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Security-Token=<truncated_token>&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=<truncated_credential>&X-Amz-Date=20250116T133649Z&X-Amz-Expires=1800&X-Amz-SignedHeaders=host&X-Amz-Signature=<truncated_signature>'
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.12.0
gradio_client version: 1.5.4
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts: 0.2.1
fastapi: 0.115.6
ffmpy: 0.5.0
gradio-client==1.5.4 is not installed.
httpx: 0.28.1
huggingface-hub: 0.27.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.1
orjson: 3.10.13
packaging: 24.2
pandas: 2.2.3
pillow: 10.4.0
pydantic: 2.10.4
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.8.6
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.12.0
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it
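A workaround sketch along these lines (an assumption, not an official fix): download the presigned object to a clean local file first, so the query string never ends up in a temp filename.
```python
import tempfile

import gradio as gr
import httpx

def load_video(presigned_url: str) -> str:
    data = httpx.get(presigned_url).content
    tmp = tempfile.NamedTemporaryFile(suffix=".mp4", delete=False)
    tmp.write(data)
    tmp.close()
    return tmp.name  # gr.Video accepts a local file path

with gr.Blocks() as demo:
    url = gr.Textbox(label="Presigned URL")
    video = gr.Video()
    url.submit(load_video, url, video)

demo.launch()
```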
|
closed
|
2025-01-21T03:56:21Z
|
2025-01-21T06:42:16Z
|
https://github.com/gradio-app/gradio/issues/10396
|
[
"bug"
] |
AshrafMorningstar
| 1
|
python-restx/flask-restx
|
flask
| 345
|
Improve documentation of `fields` arguments
|
**Is your feature request related to a problem? Please describe.**
Currently, it's quite unclear which arguments are accepted by which `flask_restx.fields` functions. For example, for `fields.Float`, [none are explicitly documented](https://flask-restx.readthedocs.io/en/latest/api.html#flask_restx.fields.Float). To find out how to specify exclusiveMinimum, I have to read the [source code](https://flask-restx.readthedocs.io/en/latest/_modules/flask_restx/fields.html) of `fields.Float`, which inherits `fields.NumberMixin`, which inherits `fields.MinMaxMixin`, where I finally find out that an openAPI `exclusiveMinimum` is specified with the RestX argument `exclusiveMin`.
**Describe the solution you'd like**
Improved documentation or code changes. Really any improvement would be welcome.
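For instance, a small sketch of what this would document (argument name taken from the source reading above; the namespace and model are made up):
```python
from flask_restx import Namespace, fields

ns = Namespace("items")
item = ns.model("Item", {
    # exclusiveMin pairs with min to emit OpenAPI's exclusiveMinimum
    "price": fields.Float(min=0, exclusiveMin=True, description="strictly positive"),
})
```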
|
open
|
2021-06-24T13:41:19Z
|
2022-07-26T13:37:03Z
|
https://github.com/python-restx/flask-restx/issues/345
|
[
"enhancement"
] |
thomsentner
| 0
|
d2l-ai/d2l-en
|
computer-vision
| 2,134
|
A mistake in seq2seq prediction implementation?
|
https://github.com/d2l-ai/d2l-en/blob/9e4fbb1e97f4e0b3919563073344368755fe205b/d2l/torch.py#L2996-L3030
**Bugs here:**
``` Python
for _ in range(num_steps):
Y, dec_state = net.decoder(dec_X, dec_state)
```
As you can see here, dec_state is updated in every loop iteration. But it affects not only the hidden state of the RNN but also the context vector at each step. (I don't know why Seq2SeqDecoder seems not to be implemented in [d2l/torch.py](https://github.com/d2l-ai/d2l-en/blob/9e4fbb1e97f4e0b3919563073344368755fe205b/d2l/torch.py).) In chapter 9.7.2, it is written:
https://github.com/d2l-ai/d2l-en/blob/9e4fbb1e97f4e0b3919563073344368755fe205b/chapter_recurrent-modern/seq2seq.md?plain=1#L383-L411

According to this figure, the correct approach should be to keep the context vector constant at every time step.
(There is no problem with this during training, because the target sentence is complete and the context vector is already broadcast to every time step.)
**My Solution (not tested):**
Modify Seq2SeqDecoder:
``` Python
class Seq2SeqDecoder(d2l.Decoder):
"""The RNN decoder for sequence to sequence learning."""
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
dropout=0):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = d2l.GRU(embed_size+num_hiddens, num_hiddens,
num_layers, dropout)
self.dense = nn.LazyLinear(vocab_size)
self.apply(init_seq2seq)
def init_state(self, enc_outputs, *args):
return enc_outputs[1]
# Add a parameter here:
def forward(self, X, enc_state, enc_final_layer_output_at_last_step):
# X shape: (batch_size, num_steps)
# embs shape: (num_steps, batch_size, embed_size)
embs = self.embedding(d2l.astype(d2l.transpose(X), d2l.int32))
# context shape: (batch_size, num_hiddens)
# Broadcast context to (num_steps, batch_size, num_hiddens)
context = enc_final_layer_output_at_last_step.repeat(embs.shape[0], 1, 1)
# Concat at the feature dimension
embs_and_context = d2l.concat((embs, context), -1)
outputs, state = self.rnn(embs_and_context, enc_state)
outputs = d2l.swapaxes(self.dense(outputs), 0, 1)
# outputs shape: (batch_size, num_steps, vocab_size)
# state shape: (num_layers, batch_size, num_hiddens)
return outputs, state
```
Modify [EncoderDecoder](https://github.com/d2l-ai/d2l-en/blob/9e4fbb1e97f4e0b3919563073344368755fe205b/d2l/torch.py#L864-L868):
``` Python
def forward(self, enc_X, dec_X, *args):
enc_outputs = self.encoder(enc_X, *args)
dec_state = self.decoder.init_state(enc_outputs, *args)
# Return decoder output only
return self.decoder(dec_X, dec_state, enc_outputs[1][-1])[0]
```
Modify predict_seq2seq():
``` Python
for _ in range(num_steps):
Y, dec_state = net.decoder(dec_X, dec_state, enc_outputs[1][-1])
```
|
closed
|
2022-05-16T09:12:58Z
|
2022-12-14T04:24:45Z
|
https://github.com/d2l-ai/d2l-en/issues/2134
|
[] |
zhmou
| 2
|
pydata/pandas-datareader
|
pandas
| 258
|
Yahoo datasource issue
|
Hello,
It looks like Yahoo single stock data has stopped as of Friday - when I try and download last night's closes it doesn't return anything.
Has the website changed or has something happened I should be aware of ?
Thanks
|
closed
|
2016-11-08T10:11:07Z
|
2017-01-08T22:10:00Z
|
https://github.com/pydata/pandas-datareader/issues/258
|
[] |
hnasser95
| 7
|
paperless-ngx/paperless-ngx
|
machine-learning
| 9,167
|
[BUG] Saved View using Tag shows no entries on Dashboard when Showed Columns is "Default" and displayed as "Table"
|
### Description
When a Saved View is displayed on the Dashboard, it shows no entries when all displayed columns are removed to set the "Default" column config.
I guess that this behavior has changed at some point, as I have Saved Views that have been created similarly already quite some time ago and those show the documents with the default columns on the Dashboard correctly.
Only newly created Saved Views seem affected.
### Steps to reproduce
1. create a Tag eg. "Todo"
2. tag some documents with this "Todo" tag
3. create a saved view based on the "Todotag, show it on the Dashboard

4. check that the tagged documents are properly displayed on the Dashboard

5. open the Saved View screen and check the created view for the "Todo" Tag

6. remove all separately present columns so that "_Default_" is shown and change the "Display as" to "Table"

7. go back to the Dashboard, see that in the "Tag: Todo" view no document is displayed anymore

### Webserver logs
```bash
(no errors or logs are written to the webserver log, when the "Seps to reproduce" are executed)
```
### Browser logs
```bash
```
### Paperless-ngx version
2.14.7
### Host OS
Raspberry Pi 4
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.14.7",
"server_os": "Linux-6.6.74-v8+-aarch64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 114440536064,
"available": 89372897280
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "mfa.0003_authenticator_type_uniq",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2025-02-19T21:31:12.651623+01:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2025-02-19T21:08:04.472993Z",
"classifier_error": null
}
}
```
### Browser
Firefox
### Configuration changes
PAPERLESS_OCR_LANGUAGES: deu eng
PAPERLESS_FILENAME_FORMAT: {created_year}/{created_month}/{created_year}{created_month}{created_day}-{correspondent}{document_type}{title}
PAPERLESS_CONSUMER_POLLING_DELAY: 15
PAPERLESS_CONSUMER_POLLING_RETRY_COUNT: 10
PAPERLESS_OCR_USER_ARGS: {"invalidate_digital_signatures": true}
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description.
|
closed
|
2025-02-19T21:51:26Z
|
2025-03-22T03:13:26Z
|
https://github.com/paperless-ngx/paperless-ngx/issues/9167
|
[
"bug",
"backend",
"frontend"
] |
kosi2801
| 1
|
saulpw/visidata
|
pandas
| 1,617
|
Packages very old
|
**Small description**
I'm not sure how or by whom this tool is packaged for Ubuntu, but the version is old enough that there isn't even a menu in the current deb provided via apt. This was pretty confusing, since the docs start off showing a menu, and trying to activate or show it was the first order of business for someone like me who prefers this convenience. This might be an apt-only thing, but I left this issue general so that other packages can be checked. I also consider this a bug rather than a feature request, since the documentation mismatch and the time spent trying to show the menu are uncomfortable.
**Expected result**
**Actual result with screenshot**
```
$ sudo apt install visidata
...
Get:1 http://ca.archive.ubuntu.com/ubuntu jammy/universe amd64 visidata all 2.2.1-1 [157 kB]
```
**Steps to reproduce with sample data and a .vd**
NA
**Additional context**
```
$ vd --version
saul.pw/VisiData v2.2.1
```
|
closed
|
2022-12-04T18:43:09Z
|
2023-02-06T22:49:29Z
|
https://github.com/saulpw/visidata/issues/1617
|
[
"bug"
] |
PaluMacil
| 6
|
pallets-eco/flask-wtf
|
flask
| 72
|
WTForms 1.0.4 supports html5
|
There's no need to keep the html5 compat module; WTForms has supported it since v1.0.4, released 2013-04-29.
|
closed
|
2013-05-29T12:01:39Z
|
2021-05-30T01:24:42Z
|
https://github.com/pallets-eco/flask-wtf/issues/72
|
[] |
zgoda
| 0
|
babysor/MockingBird
|
pytorch
| 243
|
Have you experimented with computing the m2_hat loss against the linear spectrogram?
|
Hello, I noticed that a postnet is used to convert the mel spectrogram to a linear spectrogram, but looking at the training process, the linear spectrogram is not used for the loss; the mel spectrogram is used instead.
May I ask whether you have experimented with computing the m2_hat loss against the linear spectrogram?
If you have, could you share how the results turned out?
The loss computation is pasted below:
# Backward pass
m1_loss = F.mse_loss(m1_hat, mels) + F.l1_loss(m1_hat, mels)
m2_loss = F.mse_loss(m2_hat, mels)
stop_loss = F.binary_cross_entropy(stop_pred, stop)
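For clarity, here is a minimal runnable sketch of the variant being asked about, i.e. computing the m2_hat loss against a linear-spectrogram target instead of the mel target. The tensor shapes and the `linears` target are assumptions for illustration only; this is not the repository's actual training code:
```python
import torch
import torch.nn.functional as F

# Dummy shapes purely to make the sketch runnable; in real training these
# tensors would come from the model and the dataloader.
B, n_mels, n_linear, T = 2, 80, 513, 100
mels = torch.rand(B, n_mels, T)        # mel-spectrogram target
linears = torch.rand(B, n_linear, T)   # linear-spectrogram target (assumed available)
m1_hat = torch.rand(B, n_mels, T)      # pre-postnet prediction
m2_hat = torch.rand(B, n_linear, T)    # post-postnet prediction, assumed linear-sized here
stop_pred = torch.rand(B, T)
stop = torch.randint(0, 2, (B, T)).float()

m1_loss = F.mse_loss(m1_hat, mels) + F.l1_loss(m1_hat, mels)
m2_loss = F.mse_loss(m2_hat, linears)  # linear target instead of mel
stop_loss = F.binary_cross_entropy(stop_pred, stop)
```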
|
open
|
2021-11-29T12:45:14Z
|
2021-11-30T08:48:58Z
|
https://github.com/babysor/MockingBird/issues/243
|
[] |
ghost
| 4
|
matplotlib/cheatsheets
|
matplotlib
| 42
|
Cmap 'Blues' instead of 'Oranges'
|
There might be a typo in the tips & tricks handout where a `Blues` color map is called although an `Oranges` one is displayed:

For consistency, maybe we could use
```
cmap = plt.get_cmap("Oranges")
```
instead of
```
cmap = plt.get_cmap("Blues")
```
|
closed
|
2020-08-25T09:13:42Z
|
2021-08-24T07:12:54Z
|
https://github.com/matplotlib/cheatsheets/issues/42
|
[] |
pierrepo
| 2
|
gee-community/geemap
|
jupyter
| 1,831
|
135 segmentation - Exception during adding raster / shape file to Map
|
<!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
I use Google Colab (TPU)
### Description
Hello,
I tried to run step 135 of the tutorial - https://geemap.org/notebooks/135_segmentation on google colab.
The segmentation worked fine, but the steps for adding the results to the Map did not work:
```
Map.add_raster("annotations.tif", alpha=0.5, layer_name="Masks")
Map
```

```
style = {
"color": "#3388ff",
"weight": 2,
"fillColor": "#7c4185",
"fillOpacity": 0.5,
}
Map.add_vector('masks.shp', layer_name="Vector", style=style)
Map
```

### What I Did
I tried running the following commands, and both worked:
!pip install geemap[raster]
!pip install localtileserver
Then I ran `import localtileserver`
and got the following exception:

|
closed
|
2023-11-13T21:03:56Z
|
2023-11-24T12:44:45Z
|
https://github.com/gee-community/geemap/issues/1831
|
[
"bug"
] |
FaranIdo
| 4
|
newpanjing/simpleui
|
django
| 11
|
Custom search filter does not take effect
|
I defined a custom filter and put it in the `list_filter` list. Under the native Django admin the filtering works as expected, but in simpleui the data is not filtered. My code is as follows:
```python
# model.py
class Price(models.Model):
    name = models.CharField('名称', max_length=64, blank=True, null=True)
    num = models.IntegerField('价格', blank=True, null=True)

    def __str__(self):
        return self.name

    class Meta:
        verbose_name = '价格'
        verbose_name_plural = '价格'


# admin.py
from django.contrib import admin
from goods import models

class PriceFilter(admin.SimpleListFilter):
    """
    Custom price filter
    """
    title = '价格'
    parameter_name = 'num'

    def lookups(self, request, model_admin):
        return (
            ('0', '高价'),
            ('1', '中价'),
            ('3', '低价'),
        )

    def queryset(self, request, queryset):
        if self.value() == '0':
            return queryset.filter(num__gt='555')
        if self.value() == '1':
            return queryset.filter(num__gte='222', num__lte='555')
        if self.value() == '3':
            return queryset.filter(num__lt='222')


class PriceAdmin(admin.ModelAdmin):
    list_display = ['name', 'num']
    ordering = ['-num']
    list_filter = [PriceFilter,]
```
Please take a look. If anything is unclear, feel free to @ me at any time. Thank you!
|
closed
|
2019-03-08T07:38:51Z
|
2019-04-16T09:30:43Z
|
https://github.com/newpanjing/simpleui/issues/11
|
[] |
qylove516
| 2
|
tensorpack/tensorpack
|
tensorflow
| 656
|
Dynamic imports and IDE friendliness
|
Thank you for the great framework; I have a question about editing tensorpack-based code.
I'm using the PyCharm IDE; its code navigation and auto-completion greatly simplify working with model code. These IDE features work flawlessly with TensorFlow/Keras, but don't work at all with tensorpack-based models, because tensorpack uses dynamic imports everywhere. The intelligent IDE turns into a dumb text editor, because it can't perform static analysis of tensorpack modules.
Do you have any ideas or plans for improving tensorpack's IDE friendliness? Maybe some code generation to emit static import statements during library setup?
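To illustrate the underlying mechanism with plain Python (this is generic code, not tensorpack's actual internals): a module resolved at runtime through `importlib` is opaque to static analysis, while an ordinary import statement can be indexed and autocompleted.
```python
import importlib

# Dynamic import: the IDE only sees "some module object" and cannot
# autocomplete or navigate to anything defined inside it.
mod = importlib.import_module("json")
dumps_dynamic = getattr(mod, "dumps")

# Static import: the same symbol is fully resolvable at analysis time.
import json
dumps_static = json.dumps

print(dumps_dynamic({"a": 1}) == dumps_static({"a": 1}))  # True at runtime either way
```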
|
closed
|
2018-02-15T11:27:47Z
|
2018-05-30T20:59:36Z
|
https://github.com/tensorpack/tensorpack/issues/656
|
[
"enhancement"
] |
Arturus
| 13
|
slackapi/bolt-python
|
fastapi
| 1,169
|
Using workflow custom function on AWS lambda
|
I am trying to convert a legacy workflow step to the new custom function on AWS Lambda, but I do not see any documentation on this.
The closest is https://api.slack.com/automation/functions/custom-bolt and https://api.slack.com/tutorials/tracks/bolt-custom-function, but both use `SocketModeHandler` and make no mention of FaaS platforms like AWS Lambda.
Is the feature compatible?
|
closed
|
2024-09-25T13:00:11Z
|
2024-09-29T07:11:32Z
|
https://github.com/slackapi/bolt-python/issues/1169
|
[
"question"
] |
ben-dov
| 1
|
svc-develop-team/so-vits-svc
|
deep-learning
| 125
|
[Help]: Running on Google Colab, an error occurs at the inference step: ModuleNotFoundError: No module named 'torchcrepe'
|
### Please tick the confirmation boxes below.
- [X] I have carefully read [README.md](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/README_zh_CN.md) and the [Quick solution in the wiki](https://github.com/svc-develop-team/so-vits-svc/wiki/Quick-solution).
- [X] I have already troubleshot the problem with various search engines; the question I am raising is not a common one.
- [X] I am not using a one-click package/environment package provided by a third-party user.
### System platform version
win10
### GPU model
525.85.12
### Python version
Python 3.9.16
### PyTorch version
2.0.0+cu118
### sovits branch
4.0 (default)
### Dataset source (used to judge dataset quality)
VTuber livestream audio processed with UVR
### Step where the problem occurs or command executed
Inference
### Problem description
The following error occurs at the inference step:
Traceback (most recent call last):
File "/content/so-vits-svc/inference_main.py", line 11, in <module>
from inference import infer_tool
File "/content/so-vits-svc/inference/infer_tool.py", line 20, in <module>
import utils
File "/content/so-vits-svc/utils.py", line 20, in <module>
from modules.crepe import CrepePitchExtractor
File "/content/so-vits-svc/modules/crepe.py", line 8, in <module>
import torchcrepe
ModuleNotFoundError: No module named 'torchcrepe'
### Logs
```python
#@title Synthesize audio (inference)
#@markdown Upload the audio to the so-vits-svc/raw folder, then set the model path, config file path, and the name of the synthesized audio
!python inference_main.py -m "logs/44k/G_32000.pth" -c "configs/config.json" -n "live1.wav" -s qq
Traceback (most recent call last):
File "/content/so-vits-svc/inference_main.py", line 11, in <module>
from inference import infer_tool
File "/content/so-vits-svc/inference/infer_tool.py", line 20, in <module>
import utils
File "/content/so-vits-svc/utils.py", line 20, in <module>
from modules.crepe import CrepePitchExtractor
File "/content/so-vits-svc/modules/crepe.py", line 8, in <module>
import torchcrepe
ModuleNotFoundError: No module named 'torchcrepe'
```
### Screenshot of the `so-vits-svc` and `logs/44k` folders, pasted here

### Additional notes
_No response_
|
closed
|
2023-04-05T14:49:38Z
|
2023-04-09T05:14:42Z
|
https://github.com/svc-develop-team/so-vits-svc/issues/125
|
[
"help wanted"
] |
Asgardloki233
| 1
|
marshmallow-code/apispec
|
rest-api
| 196
|
Documenting callable default
|
When a default/missing value is a callable, it is called when generating the documentation.
```python
if default is not marshmallow.missing:
    if callable(default):
        ret['default'] = default()
    else:
        ret['default'] = default
```
This can be wrong if the value of the callable depends on the moment it is called. Typically, if it is `dt.datetime.now`. There should be an opt-out for this allowing to specify an alternative text to write in the docs.
Not sure how to do that. Maybe a `default_doc` attribute in the Schema that would take precedence over default/missing.
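As a rough illustration of the idea (not apispec's API; the helper and the `default_doc` metadata key are hypothetical, and the sketch assumes marshmallow 3's `load_default`/`metadata` interface, which postdates this snippet):
```python
import datetime as dt

import marshmallow
from marshmallow import fields

def default_for_docs(field):
    """Hypothetical helper: prefer an explicit 'default_doc' metadata entry
    over calling a callable default, so dynamic defaults get stable doc text."""
    override = field.metadata.get("default_doc")
    if override is not None:
        return override
    default = field.load_default
    if default is marshmallow.missing:
        return None
    return default() if callable(default) else default

created = fields.DateTime(load_default=dt.datetime.now,
                          metadata={"default_doc": "current server time"})
print(default_for_docs(created))  # "current server time" instead of a frozen timestamp
```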
|
closed
|
2018-04-05T07:51:52Z
|
2018-07-18T02:04:04Z
|
https://github.com/marshmallow-code/apispec/issues/196
|
[
"feedback welcome"
] |
lafrech
| 4
|
Lightning-AI/pytorch-lightning
|
deep-learning
| 20,058
|
Expose `weights_only` option for loading checkpoints
|
### Description & Motivation
PyTorch will default the `torch.load(weights_only=)` option to True in the future, and started emitting FutureWarning in 2.4. Since Lightning allows storing all kinds of data into the checkpoint (mainly through save_hyperparameters() and the on_save_checkpoint hooks), so far we have to explicitly set `weights_only=False` internally to be able to load checkpoints. But we could also expose this argument as a flag for the user to choose how strict they want to be when loading checkpoints.
### Pitch
In Fabric.load, expose weights_only.
In PyTorch Lightning, either add `fit(..., load_weights_only=...)` or add an attribute to the LightningModule (similar to how we handle strict_loading).
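For context, a minimal sketch of the underlying PyTorch behavior the flag would surface (plain `torch.load`, not Lightning API):
```python
import torch

def load_checkpoint(path: str, strict: bool = True):
    """Load a checkpoint, optionally restricting unpickling to plain tensors."""
    try:
        # weights_only=True rejects arbitrary pickled objects (e.g. rich
        # hyperparameters or hook-provided state) and only accepts tensors
        # and simple containers.
        return torch.load(path, map_location="cpu", weights_only=strict)
    except Exception as err:
        print(f"strict load failed: {err}")
        # Fall back to what Lightning currently does internally.
        return torch.load(path, map_location="cpu", weights_only=False)
```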
### Alternatives
_No response_
### Additional context
_No response_
cc @borda @awaelchli
|
open
|
2024-07-08T07:30:33Z
|
2024-07-12T15:42:13Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20058
|
[
"feature",
"checkpointing"
] |
awaelchli
| 4
|
jonaswinkler/paperless-ng
|
django
| 1,447
|
[BUG] Consuming from smb share sometimes fails with "file not found" errors
|
<!---
=> Before opening an issue, please check the documentation and see if it helps you resolve your issue: https://paperless-ng.readthedocs.io/en/latest/troubleshooting.html
=> Please also make sure that you followed the installation instructions.
=> Please search the issues and look for similar issues before opening a bug report.
=> If you would like to submit a feature request please submit one under https://github.com/jonaswinkler/paperless-ng/discussions/categories/feature-requests
=> If you encounter issues while installing of configuring Paperless-ng, please post that in the "Support" section of the discussions. Remember that Paperless successfully runs on a variety of different systems. If paperless does not start, it's probably an issue with your system, and not an issue of paperless.
=> Don't remove the [BUG] prefix from the title.
-->
**Describe the bug**
I use an smb share as the consumption directory. The consumer always picks up new documents and starts processing them. Often, the task eventually fails with `[Errno 2] No such file or directory: '/usr/src/paperless/src/../consume/Scan19112021193259.pdf'`.
PDF files are copied to the share in one piece. I have set `PAPERLESS_CONSUMER_POLLING: 30`, so inotify is disabled.
**To Reproduce**
Steps to reproduce the behavior:
1. Set up paperless via docker-compose
2. Set up smb share as consumption directory
3. Add pdf file to consumption directory
4. Check the webserver log to see Error (in majority of cases)
**Expected behavior**
- Document is deleted from consumption directory
- Consumption task finishes without error
- Document is added to database
**Actual behavior**
- Document is deleted from consumption directory
- Consumption task finishes WITH error
- Document is NOT added to database
**Webserver logs**
```
[2021-11-19 19:33:18,697] [DEBUG] [paperless.management.consumer] Waiting for file /usr/src/paperless/src/../consume/Scan19112021193259.pdf to remain unmodified
[2021-11-19 19:33:23,703] [INFO] [paperless.management.consumer] Adding /usr/src/paperless/src/../consume/Scan19112021193259.pdf to the task queue.
[2021-11-19 19:33:23,881] [INFO] [paperless.consumer] Consuming Scan19112021193259.pdf
[2021-11-19 19:33:23,890] [DEBUG] [paperless.consumer] Detected mime type: application/pdf
[2021-11-19 19:33:23,905] [DEBUG] [paperless.consumer] Parser: RasterisedDocumentParser
[2021-11-19 19:33:23,914] [DEBUG] [paperless.consumer] Parsing Scan19112021193259.pdf...
[2021-11-19 19:33:24,895] [DEBUG] [paperless.parsing.tesseract] Extracted text from PDF file /usr/src/paperless/src/../consume/Scan19112021193259.pdf
[2021-11-19 19:33:25,117] [DEBUG] [paperless.parsing.tesseract] Calling OCRmyPDF with args: {'input_file': '/usr/src/paperless/src/../consume/Scan19112021193259.pdf', 'output_file': '/tmp/paperless/paperless-9bsrj6ku/archive.pdf', 'use_threads': True, 'jobs': 4, 'language': 'deu', 'output_type': 'pdfa', 'progress_bar': False, 'skip_text': True, 'clean': True, 'deskew': True, 'rotate_pages': True, 'rotate_pages_threshold': 12.0, 'sidecar': '/tmp/paperless/paperless-9bsrj6ku/sidecar.txt'}
[2021-11-19 19:33:31,516] [DEBUG] [paperless.parsing.tesseract] Incomplete sidecar file: discarding.
[2021-11-19 19:33:32,331] [DEBUG] [paperless.parsing.tesseract] Extracted text from PDF file /tmp/paperless/paperless-9bsrj6ku/archive.pdf
[2021-11-19 19:33:32,332] [DEBUG] [paperless.consumer] Generating thumbnail for Scan19112021193259.pdf...
[2021-11-19 19:33:32,346] [DEBUG] [paperless.parsing] Execute: convert -density 300 -scale 500x5000> -alpha remove -strip -auto-orient /tmp/paperless/paperless-9bsrj6ku/archive.pdf[0] /tmp/paperless/paperless-9bsrj6ku/convert.png
[2021-11-19 19:33:34,855] [DEBUG] [paperless.parsing.tesseract] Execute: optipng -silent -o5 /tmp/paperless/paperless-9bsrj6ku/convert.png -out /tmp/paperless/paperless-9bsrj6ku/thumb_optipng.png
[2021-11-19 19:33:40,278] [ERROR] [paperless.consumer] The following error occured while consuming Scan19112021193259.pdf: [Errno 2] No such file or directory: '/usr/src/paperless/src/../consume/Scan19112021193259.pdf'
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/consumer.py", line 287, in try_consume_file
document = self._store(
File "/usr/src/paperless/src/documents/consumer.py", line 372, in _store
stats = os.stat(self.path)
FileNotFoundError: [Errno 2] No such file or directory: '/usr/src/paperless/src/../consume/Scan19112021193259.pdf'
[2021-11-19 19:33:40,280] [DEBUG] [paperless.parsing.tesseract] Deleting directory /tmp/paperless/paperless-9bsrj6ku
```
**Relevant information**
- Debian 10
- Firefox
- 1.5.0
- Installation method: docker-compose.postgres-tika
- Any configuration changes you made in `docker-compose.yml`, `docker-compose.env` or `paperless.conf`:
***docker-compose.yml***
```yaml
version: "3.4"
services:
broker:
image: redis:6.0
restart: unless-stopped
networks:
- internal
db:
image: postgres:13
restart: unless-stopped
volumes:
- pgdata:/var/lib/postgresql/data
environment:
POSTGRES_DB: paperless
POSTGRES_USER: paperless
POSTGRES_PASSWORD: <<secret>>
networks:
- internal
webserver:
image: jonaswinkler/paperless-ng:1.5.0
restart: unless-stopped
depends_on:
- db
- broker
- gotenberg
- tika
ports:
- 8083:8000
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000"]
interval: 30s
timeout: 10s
retries: 5
volumes:
- data:/usr/src/paperless/data
- media:/usr/src/paperless/media
- /mnt/smb/paperless/export:/usr/src/paperless/export
- /mnt/smb/paperless/inbox:/usr/src/paperless/consume
env_file: docker-compose.env
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
PAPERLESS_DBPASS: <<secret>>
PAPERLESS_ADMIN_USER: admin
PAPERLESS_ADMIN_PASSWORD: <<secret>>
PAPERLESS_TIKA_ENABLED: 1
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
PAPERLESS_TIKA_ENDPOINT: http://tika:9998
PAPERLESS_CONSUMER_POLLING: 30
labels:
- traefik.http.routers.paperless.rule=Host(`<<secret>>`)
- traefik.port=8000
networks:
- proxy
- internal
gotenberg:
image: thecodingmachine/gotenberg
restart: unless-stopped
environment:
DISABLE_GOOGLE_CHROME: 1
networks:
- internal
tika:
image: apache/tika
restart: unless-stopped
networks:
- internal
networks:
internal:
proxy:
external: true
volumes:
data:
driver: local
driver_opts:
type: 'none'
o: 'bind'
device: '/mnt/hdd-storage/docker/paperless/data'
media:
driver: local
driver_opts:
type: 'none'
o: 'bind'
device: '/mnt/hdd-storage/docker/paperless/media'
pgdata:
```
***docker-compose.env***
```env
PAPERLESS_SECRET_KEY=<<secret>>
PAPERLESS_TIME_ZONE=Europe/Berlin
PAPERLESS_OCR_LANGUAGE=deu
````
|
closed
|
2021-11-20T15:11:19Z
|
2022-02-06T19:56:43Z
|
https://github.com/jonaswinkler/paperless-ng/issues/1447
|
[] |
drgif
| 2
|
dmlc/gluon-nlp
|
numpy
| 751
|
CI reports linkcheck as passing for pull-requests even though it fails
|
@szha There is some bug in the CI setup for pull-requests. Even though the
linkcheck consistently failed for all pull-requests after #566, the CI did not
recognize the failure and reported the test as passing. Only on the master
branch, the CI correctly reports the test as failing.
See for example the log for #732:
- The pipeline checking the links is reported as passing
http://ci.mxnet.io/blue/organizations/jenkins/GluonNLP-py3-master-gpu-doc/detail/PR-732/32/pipeline/85
- However, looking at the detailed log files,
http://ci.mxnet.io/blue/rest/organizations/jenkins/pipelines/GluonNLP-py3-master-gpu-doc/branches/PR-732/runs/32/nodes/85/steps/106/log/?start=0
we see that the recipe for target linkcheck failed and reported Error 1 (last few lines)
|
closed
|
2019-06-07T07:15:24Z
|
2019-06-08T09:27:04Z
|
https://github.com/dmlc/gluon-nlp/issues/751
|
[
"bug",
"release focus"
] |
leezu
| 2
|
KaiyangZhou/deep-person-reid
|
computer-vision
| 105
|
About the accuracy of PCB model
|
Thanks for sharing. I am reproducing the PCB model, but despite trying a variety of parameter-tuning approaches I cannot reach the accuracy reported in the original paper. My best rank-1 result for pcb_p6 is 80.3%. May I ask what accuracy your PCB model achieves? Thanks.
|
closed
|
2019-01-20T03:33:02Z
|
2019-02-04T23:19:48Z
|
https://github.com/KaiyangZhou/deep-person-reid/issues/105
|
[] |
jiao133
| 2
|
iterative/dvc
|
data-science
| 9,743
|
`exp push`: https fails
|
# Bug Report
## Description
Unable to exp push to https git remote
### Reproduce
```console
$ git clone https://github.com/iterative/example-get-started-experiments-non-demo.git
$ cd example-get-started-experiments-non-demo
$ dvc exp run --pull
$ dvc exp push -v --no-cache origin
2023-07-18 11:18:28,971 DEBUG: v3.5.1 (pip), CPython 3.11.4 on macOS-13.4.1-arm64-arm-64bit
2023-07-18 11:18:28,971 DEBUG: command: /Users/dave/micromamba/envs/example-get-started-experiments/bin/dvc exp push -v --no-cache origin
2023-07-18 11:18:29,137 DEBUG: git push experiment ['refs/exps/49/f1dcff62757de0ca1ccba55e0b3b9b79fdce5d/natal-bots:refs/exps/49/f1dcff62757de0ca1ccba55e0b3b9b79fdce5d/natal-bots'] -> 'origin'
Pushing git refs| |0.00/? [00:00, ?obj/s]Username for 'https://github.com': dberenbaum
Password for 'https://dberenbaum@github.com':
2023-07-18 11:18:33,273 ERROR: unexpected error - HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /iterative/example-get-started-experiments-non-demo.git/git-receive-pack (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2423)')))
Traceback (most recent call last):
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/urllib3/connectionpool.py", line 714, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/urllib3/connectionpool.py", line 415, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/urllib3/connection.py", line 244, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/http/client.py", line 1286, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/http/client.py", line 1332, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/http/client.py", line 1281, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/http/client.py", line 1080, in _send_output
self.send(chunk)
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/http/client.py", line 1002, in send
self.sock.sendall(data)
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/ssl.py", line 1241, in sendall
v = self.send(byte_view[count:])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/ssl.py", line 1210, in send
return self._sslobj.write(data)
^^^^^^^^^^^^^^^^^^^^^^^^
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:2423)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/dvc/cli/__init__.py", line 209, in main
ret = cmd.do_run()
^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/dvc/cli/command.py", line 26, in do_run
return self.run()
^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/dvc/commands/experiments/push.py", line 55, in run
result = self.repo.experiments.push(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/dvc/repo/experiments/__init__.py", line 389, in push
return push(self.repo, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/dvc/repo/__init__.py", line 64, in wrapper
return f(repo, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/dvc/repo/scm_context.py", line 151, in run
return method(repo, *args, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/dvc/repo/experiments/push.py", line 120, in push
push_result = _push(repo, git_remote, exp_ref_set, force)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/dvc/repo/experiments/push.py", line 162, in _push
results: Mapping[str, SyncStatus] = repo.scm.push_refspecs(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/scmrepo/git/__init__.py", line 286, in _backend_func
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/scmrepo/git/backend/dulwich/__init__.py", line 580, in push_refspecs
result = client.send_pack(
^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/dulwich/client.py", line 2048, in send_pack
resp, read = self._smart_request(
^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/dulwich/client.py", line 1988, in _smart_request
resp, read = self._http_request(url, headers, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/scmrepo/git/backend/dulwich/client.py", line 49, in _http_request
result = super()._http_request(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/dulwich/client.py", line 2211, in _http_request
resp = self.pool_manager.request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/urllib3/request.py", line 78, in request
return self.request_encode_body(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/urllib3/request.py", line 170, in request_encode_body
return self.urlopen(method, url, **extra_kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/urllib3/poolmanager.py", line 376, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/urllib3/connectionpool.py", line 826, in urlopen
return self.urlopen(
^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/urllib3/connectionpool.py", line 826, in urlopen
return self.urlopen(
^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/urllib3/connectionpool.py", line 826, in urlopen
return self.urlopen(
^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/urllib3/connectionpool.py", line 798, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started-experiments/lib/python3.11/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /iterative/example-get-started-experiments-non-demo.git/git-receive-pack (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2423)')))
2023-07-18 11:18:33,303 DEBUG: Removing '/private/tmp/.SNv8fX8x3vBLJGm7CyVXFZ.tmp'
2023-07-18 11:18:33,304 DEBUG: Removing '/private/tmp/.SNv8fX8x3vBLJGm7CyVXFZ.tmp'
2023-07-18 11:18:33,304 DEBUG: Removing '/private/tmp/.SNv8fX8x3vBLJGm7CyVXFZ.tmp'
2023-07-18 11:18:33,304 DEBUG: Removing '/private/tmp/example-get-started-experiments-non-demo/.dvc/cache/files/md5/.bFc97uwWMMPqFiP9gJcwnu.tmp'
2023-07-18 11:18:37,342 DEBUG: Version info for developers:
DVC version: 3.5.1 (pip)
------------------------
Platform: Python 3.11.4 on macOS-13.4.1-arm64-arm-64bit
Subprojects:
dvc_data = 2.5.0
dvc_objects = 0.23.0
dvc_render = 0.5.3
dvc_task = 0.3.0
scmrepo = 1.0.4
Supports:
http (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
s3 (s3fs = 2023.6.0, boto3 = 1.26.161)
Config:
Global: /Users/dave/Library/Application Support/dvc
System: /Library/Application Support/dvc
Cache types: reflink, hardlink, symlink
Cache directory: apfs on /dev/disk3s1s1
Caches: local
Remotes: https
Workspace directory: apfs on /dev/disk3s1s1
Repo: dvc, git
Repo.site_cache_dir: /Library/Caches/dvc/repo/b1fc49d4408cca5f1e780cdbe0783d09
Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help!
2023-07-18 11:18:37,346 DEBUG: Analytics is disabled.
```
|
closed
|
2023-07-18T15:21:47Z
|
2023-07-19T13:29:08Z
|
https://github.com/iterative/dvc/issues/9743
|
[
"bug",
"p2-medium",
"A: experiments",
"git"
] |
dberenbaum
| 4
|
jupyterlab/jupyter-ai
|
jupyter
| 930
|
`default.faiss` not checked for existance before using
|
jupyter-ai does not seem to check whether `default.faiss` exists before loading it through `faiss`:
```
[E 2024-08-03 17:49:43.118 AiExtension] Could not load vector index from disk. Full exception details printed below.
[E 2024-08-03 17:49:43.118 AiExtension] Error in faiss::FileIOReader::FileIOReader(const char*) at /project/faiss/faiss/impl/io.cpp:67: Error: 'f' failed: could not open /p/home/user1/cluster1/.local/share/jupyter/jupyter_ai/indices/default.faiss for reading: No such file or directory
```
and it also does not regenerate it if the file is missing.
(I am using jupyter-ai 2.19.1 in combination with JupyterLab 4.2.1)
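A minimal sketch of the kind of guard being asked for (the path is taken from the log above and generalized with `Path.home()`; `faiss.read_index` is the standard FAISS loader, while `rebuild_default_index` is a hypothetical stand-in for however jupyter-ai builds its index):
```python
from pathlib import Path

import faiss

def rebuild_default_index(dim: int = 1536) -> faiss.Index:
    """Hypothetical placeholder for however the extension (re)builds its index."""
    return faiss.IndexFlatL2(dim)  # the dimension here is an assumption

index_path = Path.home() / ".local/share/jupyter/jupyter_ai/indices/default.faiss"
if index_path.exists():
    index = faiss.read_index(str(index_path))
else:
    # Regenerate (or lazily skip) instead of failing inside FileIOReader.
    index = rebuild_default_index()
```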
|
open
|
2024-08-03T16:01:08Z
|
2025-01-21T22:14:22Z
|
https://github.com/jupyterlab/jupyter-ai/issues/930
|
[
"bug",
"good first issue"
] |
jhgoebbert
| 0
|
axnsan12/drf-yasg
|
django
| 530
|
How can I add "Injection security definitions" to my redoc?
|
Hi experts,
How can I add "Injection security definitions" to my redoc?
I didn't understand the link: https://github.com/Redocly/redoc/blob/master/docs/security-definitions-injection.md#injection-security-definitions
I want to add a few menu items before the Authentication menu item, like below:

|
closed
|
2020-01-16T19:27:35Z
|
2020-01-16T21:00:25Z
|
https://github.com/axnsan12/drf-yasg/issues/530
|
[] |
danilocastelhano1
| 1
|
mljar/mljar-supervised
|
scikit-learn
| 356
|
Add Neural Network in Optuna
|
closed
|
2021-03-29T06:51:01Z
|
2021-03-29T07:01:04Z
|
https://github.com/mljar/mljar-supervised/issues/356
|
[
"enhancement"
] |
pplonski
| 0
|
|
prisma-labs/python-graphql-client
|
graphql
| 6
|
Will update soon on PyPI
|
Hi everyone,
Sorry I've been busy. I'll update this project soon on PyPI
|
closed
|
2018-05-14T18:43:40Z
|
2018-05-25T10:04:20Z
|
https://github.com/prisma-labs/python-graphql-client/issues/6
|
[] |
ssshah86
| 1
|
littlecodersh/ItChat
|
api
| 872
|
WeChat now blocks logging in via the web version, so this package can no longer be used. Any advice?
|
open
|
2019-09-16T01:59:03Z
|
2022-03-04T08:06:58Z
|
https://github.com/littlecodersh/ItChat/issues/872
|
[] |
kuangshp
| 17
|
|
mitmproxy/mitmproxy
|
python
| 7,034
|
Are x86 (32-bit) systems supported?
|
#### Problem Description
I installed a 32-bit (x86) Python 3.12 environment and tried to build mitmproxy, and the following message appears:

Does mitmproxy support x86 (32-bit) systems?
#### Steps to reproduce the behavior:
1. python -m venv venv
2. pip install -e .
3. build error
|
closed
|
2024-07-23T08:28:48Z
|
2024-07-23T10:15:44Z
|
https://github.com/mitmproxy/mitmproxy/issues/7034
|
[
"kind/triage"
] |
linuxlin365
| 2
|
InstaPy/InstaPy
|
automation
| 6,012
|
Confused about the desired amount in session.like_by_tags
|
What am I doing wrong? I set the amount in like_by_tags with random.randint(1, 1), just as a test, so the number should be one. But when it runs, I always get 10 :S
## Expected Behavior
desired amount: 1
## Current Behavior
desired amount: 10
## Possible Solution (optional)
## InstaPy configuration
### How many posts of a user followed from a hashtag should we interact with?
session.set_user_interact(amount=2,
                          randomize=True,
                          percentage=100,
                          media='Photo')
## We like every post
session.set_do_like(enabled=False, percentage=100)
## We follow everyone
session.set_do_follow(enabled=True, percentage=100, times=1)
## We comment on every 4th post, under 20 comments.
session.set_do_comment(enabled=True, percentage=25)
session.set_delimit_commenting(enabled=False, max_comments=20, min_comments=None)
session.set_comments(comments, media='Photo')
### Random hashtags
random.shuffle(tri_hashtags)
random_tri_hashtags = tri_hashtags[:4]
## TODO - We start here
session.like_by_tags(random_tri_hashtags, amount=random.randint(1, 1), interact=True, skip_top_posts=True, media='Photo')
<img width="1297" alt="Screenshot 2021-01-07 at 21 15 27" src="https://user-images.githubusercontent.com/12101834/103940661-faeac480-512d-11eb-9a36-bbe2fc7526f5.png">
|
open
|
2021-01-07T20:21:54Z
|
2021-07-22T21:33:47Z
|
https://github.com/InstaPy/InstaPy/issues/6012
|
[] |
noobi97
| 4
|
Gozargah/Marzban
|
api
| 689
|
Node distribution
|
We need to add the ability to select nodes for users individually. So that some users are on node number one, and some users are on node number 2
|
closed
|
2023-12-10T09:16:16Z
|
2024-11-04T18:32:20Z
|
https://github.com/Gozargah/Marzban/issues/689
|
[
"Feature"
] |
Genry777Morgan
| 7
|
yt-dlp/yt-dlp
|
python
| 12,026
|
[RFE] Supported Site Request - Means TV aka means.tv
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Worldwide I think? Definitely in the USA
### Example URLs
- Collection: https://means.tv/programs/mmn
- Single video: https://means.tv/programs/mmn?cid=4003569&permalink=mmn-daily_122024
- Single video: https://means.tv/programs/mmn-daily_122024
### Provide a description that is worded well enough to be understood
This site is for anti-capitalist content. The example URLs all contain free downloads (the collection contains a mix of free and subscription), I checked the supported sites list and did not find it. I tried running with both the collection and the first single video and the generic extractor was unable to process either request.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--no-call-home', '--add-metadata', '--cookies', 'cookie-jar-file', '--embed-metadata', '--embed-thumbnail', '-v', 'https://means.tv/programs/mmn?cid=4003569&permalink=mmn-daily_122024']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.23 from yt-dlp/yt-dlp [65cf46cdd] (source)
[debug] Lazy loading extractors is disabled
[debug] Git HEAD: 0b6b7742c
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.0-128-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg N-113348-g0a5813fc68-20240119 (setts), ffprobe N-113348-g0a5813fc68-20240119
[debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2024.02.02, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-2.1.0, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[generic] Extracting URL: https://means.tv/programs/mmn?cid=4003569&permalink=mmn-daily_122024
[generic] mmn?cid=4003569&permalink=mmn-daily_122024: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] mmn?cid=4003569&permalink=mmn-daily_122024: Extracting information
[debug] Looking for embeds
[debug] Identified a JSON LD
[generic] Extracting URL: https://means.tv/programs/mmn-daily_122024#__youtubedl_smuggle=%7B%22force_videoid%22%3A+%22mmn%3Fcid%3D4003569%26permalink%3Dmmn-daily_122024%22%2C+%22to_generic%22%3A+true%2C+%22referer%22%3A+%22https%3A%2F%2Fmeans.tv%2Fprograms%2Fmmn%3Fcid%3D4003569%26permalink%3Dmmn-daily_122024%22%7D
[generic] mmn?cid=4003569&permalink=mmn-daily_122024: Downloading webpage
[generic] mmn?cid=4003569&permalink=mmn-daily_122024: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://means.tv/programs/mmn-daily_122024
Traceback (most recent call last):
File "/home/h/dev/yt-dlp/yt_dlp/YoutubeDL.py", line 1634, in wrapper
return func(self, *args, **kwargs)
File "/home/h/dev/yt-dlp/yt_dlp/YoutubeDL.py", line 1769, in __extract_info
ie_result = ie.extract(url)
File "/home/h/dev/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/home/h/dev/yt-dlp/yt_dlp/extractor/generic.py", line 2553, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://means.tv/programs/mmn-daily_122024
```
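In case it helps whoever picks this up, here is a rough sketch of the shape a dedicated extractor could take (the URL pattern comes from the examples above; everything else is a placeholder and the class is not part of yt-dlp):
```python
from yt_dlp.extractor.common import InfoExtractor

class MeansTVIE(InfoExtractor):
    _VALID_URL = r'https?://means\.tv/programs/(?P<id>[\w-]+)'

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        title = self._og_search_title(webpage, default=video_id)
        # The actual media lookup (player config / API call) still has to be
        # reverse-engineered from the site; the empty formats list is a stub.
        formats = []
        return {
            'id': video_id,
            'title': title,
            'formats': formats,
        }
```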
|
open
|
2025-01-08T03:16:32Z
|
2025-01-25T01:14:38Z
|
https://github.com/yt-dlp/yt-dlp/issues/12026
|
[
"site-request",
"triage"
] |
hibes
| 12
|
thtrieu/darkflow
|
tensorflow
| 846
|
There is an Error!
|
Parsing ./cfg/yolo.cfg
Parsing cfg/yolo.cfg
Loading bin/yolo.weights ...
Traceback (most recent call last):
File "E:\Users\ZP\Desktop\Getdata\flow.py", line 6, in <module>
cliHandler(sys.argv)
File "D:\Program Files\Python36\lib\site-packages\darkflow\cli.py", line 26, in cliHandler
tfnet = TFNet(FLAGS)
File "D:\Program Files\Python36\lib\site-packages\darkflow\net\build.py", line 58, in __init__
darknet = Darknet(FLAGS)
File "D:\Program Files\Python36\lib\site-packages\darkflow\dark\darknet.py", line 27, in __init__
self.load_weights()
File "D:\Program Files\Python36\lib\site-packages\darkflow\dark\darknet.py", line 82, in load_weights
wgts_loader = loader.create_loader(*args)
File "D:\Program Files\Python36\lib\site-packages\darkflow\utils\loader.py", line 105, in create_loader
return load_type(path, cfg)
File "D:\Program Files\Python36\lib\site-packages\darkflow\utils\loader.py", line 19, in __init__
self.load(*args)
File "D:\Program Files\Python36\lib\site-packages\darkflow\utils\loader.py", line 77, in load
walker.offset, walker.size)
AssertionError: expect 203934260 bytes, found 248007048
|
open
|
2018-07-18T10:06:04Z
|
2018-07-18T10:06:04Z
|
https://github.com/thtrieu/darkflow/issues/846
|
[] |
JoffreyN
| 0
|
dask/dask
|
numpy
| 11,382
|
when using max/min as the first expression for a new column, the dataframe will not compute
|
**Describe the issue**:
In the code example below, the dataframe fails to compute after we have calculated the cell_y column. I have determined that the cause is the arrangement of the expression; rearranging it removes the error. This seems like a bug.
**Minimal Complete Verifiable Example**:
```python
import dask
import dask.dataframe as dd
import pandas as pd
from importlib.metadata import version
print(version("dask"))
print(version("pandas"))
data = pd.DataFrame(
{
"x": [0, 1, 0, 1],
"y": [0, 0, 1, 1],
"z": [1, 2, 3, 4],
"beam_flag": [5, 0, 0, 0],
"ping_number": [0] * 4,
"beam_number": [0] * 4,
"filename": [0] * 4,
}
)
ddf = dd.from_pandas(data, npartitions=1)
ddf["cell_x"] = ((ddf.x - ddf.x.min()) // 1).astype("uint32")
ddf["cell_y"] = ((ddf.y.max() - ddf.y) // 1).astype("uint32")
print(ddf.compute())
```
**Anything else we need to know?**:
I got around the error by rearranging the expression; the code below does not error.
The original code worked in older Dask versions.
```python
import dask
import dask.dataframe as dd
import pandas as pd
from importlib.metadata import version
print(version("dask"))
print(version("pandas"))
data = pd.DataFrame(
{
"x": [0, 1, 0, 1],
"y": [0, 0, 1, 1],
"z": [1, 2, 3, 4],
"beam_flag": [5, 0, 0, 0],
"ping_number": [0] * 4,
"beam_number": [0] * 4,
"filename": [0] * 4,
}
)
ddf = dd.from_pandas(data, npartitions=1)
ddf["cell_x"] = ((ddf.x - ddf.x.min()) // 1).astype("uint32")
ddf["cell_y"] = ((-ddf.y + ddf.y.max()) // 1).astype("uint32")
print(ddf.compute())
```
**Environment**:
- Dask version: 2024.8.2
- Pandas: 2.2.2
- Python version: 3.10
- Operating System: linux
- Install method (poetry):
|
closed
|
2024-09-10T12:32:19Z
|
2024-10-11T16:02:59Z
|
https://github.com/dask/dask/issues/11382
|
[
"dask-expr"
] |
JimHBeam
| 1
|
microsoft/unilm
|
nlp
| 922
|
Can Magneto train a 1000 layer network and how does it compare to Deepnet?
|
Looking forward to the reply. Thanks. ^^ @shumingma
|
closed
|
2022-11-18T09:41:01Z
|
2022-11-24T04:05:05Z
|
https://github.com/microsoft/unilm/issues/922
|
[] |
nbcc
| 5
|
PokeAPI/pokeapi
|
api
| 813
|
Inconsistency between pokemon_forms and pokemon/pokemon_stats ids
|
closed
|
2023-01-10T21:57:41Z
|
2023-01-12T01:24:49Z
|
https://github.com/PokeAPI/pokeapi/issues/813
|
[] |
tillfox
| 0
|
|
modelscope/modelscope
|
nlp
| 1,194
|
Has the default download path of `modelscope download` for datasets changed?
|
Shouldn't it be under .cache/modelscope/datasets? Why has it now changed to .cache/modelscope/hub/datasets? @wangxingjun778
|
closed
|
2025-01-17T16:36:45Z
|
2025-01-19T06:32:05Z
|
https://github.com/modelscope/modelscope/issues/1194
|
[] |
acdart
| 7
|
freqtrade/freqtrade
|
python
| 10,654
|
Error during backtesting QuickAdapterV3 KeyError: '&s-extrema'
|
<!--
Have you searched for similar issues before posting it?
Yes, and I found that changing the model identifier helps freqtrade avoid getting stuck on old models, but even if I delete the old models I still get this error. The issue I found: https://github.com/freqtrade/freqtrade/issues/9379 .
If you have discovered a bug in the bot, please [search the issue tracker](https://github.com/freqtrade/freqtrade/issues?q=is%3Aissue).
If it hasn't been reported, please create a new issue.
Please do not use bug reports to request new features.
-->
## Describe your environment
* Operating system: Windows 11 with Docker image freqtradeorg/freqtrade:stable_freqaitorch
* Python Version: 3.10 (`python -V`)
* CCXT version: 4.3.65 (`pip freeze | grep ccxt`)
* Freqtrade Version: freqtrade docker-2024.9-dev-704e32b0 (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
Note: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.
## Describe the problem:
*Explain the problem you have encountered*
Hello, first of all, thanks for all your amazing work and support! I’ve just started working with FreqAI and I’m still trying to grasp everything.
While running backtesting, I either encounter an error, or the backtesting finishes, but no trades are made. I made small modifications to the config file to reduce the amount of data processed during training, but nothing else. I also tried backtesting without the confirm_trade_entry function because it calls Trade.get_trades(), which is not supported in backtesting. I haven’t touched the XGBRegressor file, even though the error seems to originate from there.
### Steps to reproduce:
1. Backtest the strategy
### Observed Results:
I either get the error or no trades are made
### Relevant code exceptions or logs
The logs
```
docker compose -f docker-compose-ai.yml run --rm freqtrade backtesting --config user_data/config_ai.json --strategy QuickAdapterV3 --freqaimodel XGBoostRegressorQuickAdapterV3 --timerange 20240501-20240510
2024-09-15 16:46:17,738 - freqtrade - INFO - freqtrade 2024.7.1
2024-09-15 16:46:17,763 - freqtrade.configuration.load_config - INFO - Using config: user_data/config_ai.json ...
2024-09-15 16:46:17,770 - freqtrade.loggers - INFO - Verbosity set to 0
2024-09-15 16:46:17,770 - freqtrade.configuration.configuration - WARNING - `force_entry_enable` RPC message enabled.
2024-09-15 16:46:17,771 - freqtrade.configuration.configuration - INFO - Using max_open_trades: 3 ...
2024-09-15 16:46:17,771 - freqtrade.configuration.configuration - INFO - Parameter --timerange detected: 20240501-20240510 ...
2024-09-15 16:46:18,149 - freqtrade.configuration.configuration - INFO - Using user-data directory: /freqtrade/user_data ...
2024-09-15 16:46:18,153 - freqtrade.configuration.configuration - INFO - Using data directory: /freqtrade/user_data/data/binance ...
2024-09-15 16:46:18,153 - freqtrade.configuration.configuration - INFO - Parameter --cache=day detected ...
2024-09-15 16:46:18,153 - freqtrade.configuration.configuration - INFO - Filter trades by timerange: 20240501-20240510
2024-09-15 16:46:18,153 - freqtrade.configuration.configuration - INFO - Using freqaimodel class name: XGBoostRegressorQuickAdapterV3
2024-09-15 16:46:18,155 - freqtrade.exchange.check_exchange - INFO - Checking exchange...
2024-09-15 16:46:18,171 - freqtrade.exchange.check_exchange - INFO - Exchange "binance" is officially supported by the Freqtrade development team.
2024-09-15 16:46:18,171 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.
2024-09-15 16:46:18,171 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
2024-09-15 16:46:18,176 - freqtrade.commands.optimize_commands - INFO - Starting freqtrade in Backtesting mode
2024-09-15 16:46:18,176 - freqtrade.exchange.exchange - INFO - Instance is running with dry_run enabled
2024-09-15 16:46:18,176 - freqtrade.exchange.exchange - INFO - Using CCXT 4.3.65
2024-09-15 16:46:18,177 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap'}}
2024-09-15 16:46:18,190 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap'}}
2024-09-15 16:46:18,206 - freqtrade.exchange.exchange - INFO - Using Exchange "Binance"
2024-09-15 16:46:21,060 - freqtrade.resolvers.exchange_resolver - INFO - Using resolved exchange 'Binance'...
2024-09-15 16:46:21,676 - freqtrade.resolvers.iresolver - WARNING - Could not import /freqtrade/user_data/strategies/GodStra.py due to 'No module named 'ta''
2024-09-15 16:46:21,696 - freqtrade.resolvers.iresolver - WARNING - Could not import /freqtrade/user_data/strategies/Heracles.py due to 'No module named 'ta''
2024-09-15 16:46:21,771 - freqtrade.resolvers.iresolver - WARNING - Could not import /freqtrade/user_data/strategies/MacheteV8b_Fixed.py due to 'No module named 'finta''
2024-09-15 16:46:22,326 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy QuickAdapterV3 from '/freqtrade/user_data/strategies/QuickAdapterV3.py'...
2024-09-15 16:46:22,328 - freqtrade.strategy.hyper - INFO - Found no parameter file.
2024-09-15 16:46:22,329 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'timeframe' with value in config file: 3m.
2024-09-15 16:46:22,329 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_currency' with value in config file: USDT.
2024-09-15 16:46:22,329 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_amount' with value in config file: unlimited.
2024-09-15 16:46:22,329 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'unfilledtimeout' with value in config file: {'entry': 10, 'exit': 30, 'exit_timeout_count': 0, 'unit': 'minutes'}.
2024-09-15 16:46:22,329 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'max_open_trades' with value in config file: 3.
2024-09-15 16:46:22,329 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using minimal_roi: {'0': 0.03, '5000': -1}
2024-09-15 16:46:22,330 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using timeframe: 3m
2024-09-15 16:46:22,330 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stoploss: -0.04
2024-09-15 16:46:22,330 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop: False
2024-09-15 16:46:22,330 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop_positive_offset: 0.0
2024-09-15 16:46:22,330 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_only_offset_is_reached: False
2024-09-15 16:46:22,330 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_custom_stoploss: False
2024-09-15 16:46:22,330 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using process_only_new_candles: True
2024-09-15 16:46:22,330 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_types: {'entry': 'limit', 'exit': 'market', 'emergency_exit': 'market', 'force_exit': 'market', 'force_entry': 'market', 'stoploss': 'market', 'stoploss_on_exchange': False, 'stoploss_on_exchange_interval': 120}
2024-09-15 16:46:22,330 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_time_in_force: {'entry': 'GTC', 'exit': 'GTC'}
2024-09-15 16:46:22,331 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_currency: USDT
2024-09-15 16:46:22,331 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_amount: unlimited
2024-09-15 16:46:22,331 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using protections: [{'method': 'CooldownPeriod', 'stop_duration_candles': 4}, {'method': 'MaxDrawdown', 'lookback_period_candles': 48, 'trade_limit': 20, 'stop_duration_candles': 4, 'max_allowed_drawdown': 0.2}]
2024-09-15 16:46:22,331 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using startup_candle_count: 80
2024-09-15 16:46:22,331 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using unfilledtimeout: {'entry': 10, 'exit': 30, 'exit_timeout_count': 0, 'unit': 'minutes'}
2024-09-15 16:46:22,331 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_exit_signal: True
2024-09-15 16:46:22,331 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_only: False
2024-09-15 16:46:22,331 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_roi_if_entry_signal: False
2024-09-15 16:46:22,331 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_offset: 0.0
2024-09-15 16:46:22,331 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using disable_dataframe_checks: False
2024-09-15 16:46:22,332 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_buying_expired_candle_after: 0
2024-09-15 16:46:22,332 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using position_adjustment_enable: False
2024-09-15 16:46:22,332 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_entry_position_adjustment: 1
2024-09-15 16:46:22,332 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_open_trades: 3
2024-09-15 16:46:22,332 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
2024-09-15 16:46:22,370 - freqtrade.resolvers.iresolver - INFO - Using resolved pairlist StaticPairList from '/freqtrade/freqtrade/plugins/pairlist/StaticPairList.py'...
2024-09-15 16:46:22,389 - freqtrade.optimize.backtesting - INFO - Using fee 0.0500% - worst case fee from exchange (lowest tier).
2024-09-15 16:46:22,390 - freqtrade.data.dataprovider - INFO - Increasing startup_candle_count for freqai on 3m to 6800
2024-09-15 16:46:22,410 - freqtrade.data.history.history_utils - INFO - Using indicator startup period: 6800 ...
2024-09-15 16:46:23,336 - freqtrade.optimize.backtesting - INFO - Loading data from 2024-04-16 20:00:00 up to 2024-05-10 00:00:00 (23 days).
2024-09-15 16:46:23,650 - freqtrade.optimize.backtesting - INFO - Dataload complete. Calculating indicators
2024-09-15 16:46:23,705 - freqtrade.optimize.backtesting - INFO - Running backtesting for Strategy QuickAdapterV3
2024-09-15 16:46:23,968 - freqtrade.resolvers.iresolver - INFO - Using resolved freqaimodel XGBoostRegressorQuickAdapterV3 from '/freqtrade/user_data/freqaimodels/XGBoostRegressorQuickAdapterV3.py'...
2024-09-15 16:46:23,973 - freqtrade.freqai.freqai_interface - INFO - Backtesting module configured to save all models.
2024-09-15 16:46:23,985 - freqtrade.freqai.data_drawer - INFO - Could not find existing historic_predictions, starting from scratch
2024-09-15 16:46:23,988 - freqtrade.freqai.data_drawer - INFO - Could not find existing metric tracker, starting from scratch
2024-09-15 16:46:23,988 - freqtrade.freqai.freqai_interface - INFO - Set existing queue from trained timestamps. Best approximation queue: {best_queue}
2024-09-15 16:46:24,010 - freqtrade.strategy.hyper - INFO - No params for buy found, using default values.
2024-09-15 16:46:24,011 - freqtrade.strategy.hyper - INFO - No params for sell found, using default values.
2024-09-15 16:46:24,011 - freqtrade.strategy.hyper - INFO - No params for protection found, using default values.
2024-09-15 16:46:24,020 - freqtrade.freqai.freqai_interface - INFO - Training 5 timeranges
2024-09-15 16:46:24,023 - freqtrade.freqai.freqai_interface - INFO - Training BTC/USDT:USDT, 1/4 pairs from 2024-04-17 00:00:00 to 2024-05-01 00:00:00, 1/5 trains
2024-09-15 16:46:24,043 - freqtrade.freqai.data_kitchen - INFO - Found backtesting prediction file at /freqtrade/user_data/models/quickadapter-xgboost/backtesting_predictions/cb_btc_1714521600_prediction.feather
2024-09-15 16:46:24,161 - freqtrade.data.dataprovider - INFO - Increasing startup_candle_count for freqai on 5m to 4112
2024-09-15 16:46:24,161 - freqtrade.data.dataprovider - INFO - Loading data for BTC/USDT:USDT 5m from 2024-04-16 17:20:00 to 2024-05-10 00:00:00
2024-09-15 16:46:24,918 - freqtrade.data.dataprovider - INFO - Increasing startup_candle_count for freqai on 15m to 1424
2024-09-15 16:46:24,918 - freqtrade.data.dataprovider - INFO - Loading data for BTC/USDT:USDT 15m from 2024-04-16 04:00:00 to 2024-05-10 00:00:00
2024-09-15 16:46:25,235 - freqtrade.data.dataprovider - INFO - Increasing startup_candle_count for freqai on 1h to 416
2024-09-15 16:46:25,235 - freqtrade.data.dataprovider - INFO - Loading data for BTC/USDT:USDT 1h from 2024-04-13 16:00:00 to 2024-05-10 00:00:00
2024-09-15 16:46:25,418 - freqtrade.freqai.freqai_interface - INFO - Training BTC/USDT:USDT, 1/4 pairs from 2024-04-19 00:00:00 to 2024-05-03 00:00:00, 2/5 trains
2024-09-15 16:46:25,437 - freqtrade.freqai.data_kitchen - INFO - Found backtesting prediction file at /freqtrade/user_data/models/quickadapter-xgboost/backtesting_predictions/cb_btc_1714694400_prediction.feather
2024-09-15 16:46:25,453 - freqtrade.freqai.freqai_interface - INFO - Training BTC/USDT:USDT, 1/4 pairs from 2024-04-21 00:00:00 to 2024-05-05 00:00:00, 3/5 trains
2024-09-15 16:46:25,470 - freqtrade.freqai.data_kitchen - INFO - Found backtesting prediction file at /freqtrade/user_data/models/quickadapter-xgboost/backtesting_predictions/cb_btc_1714867200_prediction.feather
2024-09-15 16:46:25,485 - freqtrade.freqai.freqai_interface - INFO - Training BTC/USDT:USDT, 1/4 pairs from 2024-04-23 00:00:00 to 2024-05-07 00:00:00, 4/5 trains
2024-09-15 16:46:25,501 - freqtrade.freqai.data_kitchen - INFO - Found backtesting prediction file at /freqtrade/user_data/models/quickadapter-xgboost/backtesting_predictions/cb_btc_1715040000_prediction.feather
2024-09-15 16:46:25,516 - freqtrade.freqai.freqai_interface - INFO - Training BTC/USDT:USDT, 1/4 pairs from 2024-04-25 00:00:00 to 2024-05-09 00:00:00, 5/5 trains
2024-09-15 16:46:25,533 - freqtrade.freqai.data_kitchen - INFO - Found backtesting prediction file at /freqtrade/user_data/models/quickadapter-xgboost/backtesting_predictions/cb_btc_1715212800_prediction.feather
2024-09-15 16:46:25,544 - freqtrade.freqai.freqai_interface - INFO - Applying fit_live_predictions in backtesting
2024-09-15 16:46:25,607 - freqtrade - ERROR - Fatal exception!
Traceback (most recent call last):
File "/freqtrade/freqtrade/main.py", line 43, in main
return_code = args["func"](args)
^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/commands/optimize_commands.py", line 60, in start_backtesting
backtesting.start()
File "/freqtrade/freqtrade/optimize/backtesting.py", line 1614, in start
min_date, max_date = self.backtest_one_strategy(strat, data, timerange)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/optimize/backtesting.py", line 1524, in backtest_one_strategy
preprocessed = self.strategy.advise_all_indicators(data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/strategy/interface.py", line 1587, in advise_all_indicators
pair: self.advise_indicators(pair_data.copy(), {"pair": pair}).copy()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/strategy/interface.py", line 1645, in advise_indicators
return self.populate_indicators(dataframe, metadata)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/user_data/strategies/QuickAdapterV3.py", line 239, in populate_indicators
dataframe = self.freqai.start(dataframe, metadata, self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/freqai/freqai_interface.py", line 161, in start
dk = self.start_backtesting(dataframe, metadata, self.dk, strategy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/freqai/freqai_interface.py", line 387, in start_backtesting
self.backtesting_fit_live_predictions(dk)
File "/freqtrade/freqtrade/freqai/freqai_interface.py", line 891, in backtesting_fit_live_predictions
dk.full_df.at[index, f"{label}_mean"] = self.dk.data["labels_mean"][
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: '&s-extrema'
```
##The config:
```
{
"trading_mode": "futures",
"margin_mode": "isolated",
"max_open_trades": 3,
"stake_currency": "USDT",
"stake_amount": "unlimited",
"tradable_balance_ratio": 0.1,
"fiat_display_currency": "USD",
"dry_run": true,
"timeframe": "3m",
"dry_run_wallet": 1000,
"cancel_open_orders_on_exit": true,
"unfilledtimeout": {
"entry": 10,
"exit": 30
},
"exchange": {
"name": "binance",
"key": "",
"secret": "",
"ccxt_config": {},
"ccxt_async_config": {},
"pair_whitelist": [
"BTC/USDT:USDT","SOL/USDT:USDT","BNB/USDT:USDT","ETH/USDT:USDT"
],
"pair_blacklist": []
},
"entry_pricing": {
"price_side": "same",
"use_order_book": true,
"order_book_top": 1,
"price_last_balance": 0.0,
"check_depth_of_market": {
"enabled": false,
"bids_to_ask_delta": 1
}
},
"exit_pricing": {
"price_side": "other",
"use_order_book": true,
"order_book_top": 1
},
"pairlists": [
{
"method": "StaticPairList"
}
],
"freqai": {
"enabled": true,
"conv_width": 1,
"purge_old_models": 5,
"expiration_hours": 10,
"train_period_days": 14,
"backtest_period_days": 2,
"write_metrics_to_disk": true,
"identifier": "quickadapter-xgboost",
"fit_live_predictions_candles": 600,
"data_kitchen_thread_count": 10,
"track_performance": true,
"extra_returns_per_train": {"DI_value_param1":0, "DI_value_param2":0, "DI_value_param3":0, "DI_cutoff": 2, "&s-minima_sort_threshold":-2, "&s-maxima_sort_threshold":2},
"feature_parameters": {
"include_corr_pairlist": [
],
"include_timeframes": [
"5m",
"15m",
"1h"
],
"label_period_candles": 100,
"include_shifted_candles": 3,
"DI_threshold": 10,
"weight_factor": 0.9,
"principal_component_analysis": false,
"use_SVM_to_remove_outliers": false,
"use_DBSCAN_to_remove_outliers": false,
"indicator_periods_candles": [8, 16, 32],
"inlier_metric_window": 0,
"noise_standard_deviation": 0.02,
"reverse_test_train_order": false,
"plot_feature_importances": 0,
"buffer_train_data_candles": 100
},
"data_split_parameters": {
"test_size": 0,
"random_state": 1,
"shuffle": false
},
"model_training_parameters": {
"n_jobs": 10,
"verbosity": 1
}
},
"bot_name": "",
"force_entry_enable": true,
"initial_state": "running",
"internals": {
"process_throttle_secs": 5
},
"telegram": {
"enabled": true,
"token": "your_telegram_token",
"chat_id": "your_telegram_chat_id"
},
"api_server": {
"enabled": true,
"listen_ip_address": "0.0.0.0",
"listen_port": 8080,
"verbosity": "error",
"jwt_secret_key": "somethingrandom",
"CORS_origins": [],
"username": "freqtrader",
"password": "SuperSecurePassword"
}
}
```
## The strat:
``` python
import logging
from functools import reduce
import datetime
from datetime import timedelta
import talib.abstract as ta
from pandas import DataFrame, Series
from technical import qtpylib
from typing import Optional
from freqtrade.strategy.interface import IStrategy
from technical.pivots_points import pivots_points
from freqtrade.exchange import timeframe_to_prev_date
from freqtrade.persistence import Trade
from scipy.signal import argrelextrema
import numpy as np
import pandas_ta as pta
logger = logging.getLogger(__name__)
class QuickAdapterV3(IStrategy):
"""
The following freqaimodel is released to sponsors of the non-profit FreqAI open-source project.
If you find the FreqAI project useful, please consider supporting it by becoming a sponsor.
We use sponsor money to help stimulate new features and to pay for running these public
experiments, with a an objective of helping the community make smarter choices in their
ML journey.
This strategy is experimental (as with all strategies released to sponsors). Do *not* expect
returns. The goal is to demonstrate gratitude to people who support the project and to
help them find a good starting point for their own creativity.
If you have questions, please direct them to our discord: https://discord.gg/xE4RMg4QYw
https://github.com/sponsors/robcaulk
"""
position_adjustment_enable = False
# Attempts to handle large drops with DCA. High stoploss is required.
stoploss = -0.04
order_types = {
"entry": "limit",
"exit": "market",
"emergency_exit": "market",
"force_exit": "market",
"force_entry": "market",
"stoploss": "market",
"stoploss_on_exchange": False,
"stoploss_on_exchange_interval": 120,
}
# # Example specific variables
max_entry_position_adjustment = 1
# # This number is explained a bit further down
max_dca_multiplier = 2
minimal_roi = {"0": 0.03, "5000": -1}
process_only_new_candles = True
can_short = True
plot_config = {
"main_plot": {},
"subplots": {
"accuracy": {
"accuracy_score": {
"color": "#c28ce3",
"type": "line"
}
},
"extrema": {
"&s-extrema": {
"color": "#f53580",
"type": "line"
},
"&s-minima_sort_threshold": {
"color": "#4ae747",
"type": "line"
},
"&s-maxima_sort_threshold": {
"color": "#5b5e4b",
"type": "line"
}
},
"min_max": {
"maxima": {
"color": "#a29db9",
"type": "line"
},
"minima": {
"color": "#ac7fc",
"type": "bar"
}
}
}
}
@property
def protections(self):
return [
{"method": "CooldownPeriod", "stop_duration_candles": 4},
{
"method": "MaxDrawdown",
"lookback_period_candles": 48,
"trade_limit": 20,
"stop_duration_candles": 4,
"max_allowed_drawdown": 0.2,
},
# {
# "method": "StoplossGuard",
# "lookback_period_candles": 300,
# "trade_limit": 1,
# "stop_duration_candles": 300,
# "only_per_pair": True,
# },
]
use_exit_signal = True
startup_candle_count: int = 80
# # Trailing stop:
# trailing_stop = True
# trailing_stop_positive = 0.01
# trailing_stop_positive_offset = 0.025
# trailing_only_offset_is_reached = True
def feature_engineering_expand_all(self, dataframe, period, **kwargs):
dataframe["%-rsi-period"] = ta.RSI(dataframe, timeperiod=period)
dataframe["%-mfi-period"] = ta.MFI(dataframe, timeperiod=period)
dataframe["%-adx-period"] = ta.ADX(dataframe, window=period)
dataframe["%-cci-period"] = ta.CCI(dataframe, timeperiod=period)
dataframe["%-er-period"] = pta.er(dataframe['close'], length=period)
dataframe["%-rocr-period"] = ta.ROCR(dataframe, timeperiod=period)
dataframe["%-cmf-period"] = chaikin_mf(dataframe, periods=period)
dataframe["%-tcp-period"] = top_percent_change(dataframe, period)
dataframe["%-cti-period"] = pta.cti(dataframe['close'], length=period)
dataframe["%-chop-period"] = qtpylib.chopiness(dataframe, period)
dataframe["%-linear-period"] = ta.LINEARREG_ANGLE(
dataframe['close'], timeperiod=period)
dataframe["%-atr-period"] = ta.ATR(dataframe, timeperiod=period)
dataframe["%-atr-periodp"] = dataframe["%-atr-period"] / \
dataframe['close'] * 1000
return dataframe
def feature_engineering_expand_basic(self, dataframe, **kwargs):
dataframe["%-pct-change"] = dataframe["close"].pct_change()
dataframe["%-raw_volume"] = dataframe["volume"]
dataframe["%-obv"] = ta.OBV(dataframe)
# Added
bollinger = qtpylib.bollinger_bands(qtpylib.typical_price(dataframe), window=14, stds=2.2)
dataframe["bb_lowerband"] = bollinger["lower"]
dataframe["bb_middleband"] = bollinger["mid"]
dataframe["bb_upperband"] = bollinger["upper"]
dataframe["%-bb_width"] = (dataframe["bb_upperband"] -
dataframe["bb_lowerband"]) / dataframe["bb_middleband"]
dataframe["%-ibs"] = ((dataframe['close'] - dataframe['low']) /
(dataframe['high'] - dataframe['low']))
dataframe['ema_50'] = ta.EMA(dataframe, timeperiod=50)
dataframe['ema_12'] = ta.EMA(dataframe, timeperiod=12)
dataframe['ema_26'] = ta.EMA(dataframe, timeperiod=26)
dataframe['%-distema50'] = get_distance(dataframe['close'], dataframe['ema_50'])
dataframe['%-distema12'] = get_distance(dataframe['close'], dataframe['ema_12'])
dataframe['%-distema26'] = get_distance(dataframe['close'], dataframe['ema_26'])
macd = ta.MACD(dataframe)
dataframe['%-macd'] = macd['macd']
dataframe['%-macdsignal'] = macd['macdsignal']
dataframe['%-macdhist'] = macd['macdhist']
dataframe['%-dist_to_macdsignal'] = get_distance(
dataframe['%-macd'], dataframe['%-macdsignal'])
dataframe['%-dist_to_zerohist'] = get_distance(0, dataframe['%-macdhist'])
# VWAP
vwap_low, vwap, vwap_high = VWAPB(dataframe, 20, 1)
dataframe['vwap_upperband'] = vwap_high
dataframe['vwap_middleband'] = vwap
dataframe['vwap_lowerband'] = vwap_low
dataframe['%-vwap_width'] = ((dataframe['vwap_upperband'] -
dataframe['vwap_lowerband']) / dataframe['vwap_middleband']) * 100
dataframe = dataframe.copy()
dataframe['%-dist_to_vwap_upperband'] = get_distance(
dataframe['close'], dataframe['vwap_upperband'])
dataframe['%-dist_to_vwap_middleband'] = get_distance(
dataframe['close'], dataframe['vwap_middleband'])
dataframe['%-dist_to_vwap_lowerband'] = get_distance(
dataframe['close'], dataframe['vwap_lowerband'])
dataframe['%-tail'] = (dataframe['close'] - dataframe['low']).abs()
dataframe['%-wick'] = (dataframe['high'] - dataframe['close']).abs()
pp = pivots_points(dataframe)
dataframe['pivot'] = pp['pivot']
dataframe['r1'] = pp['r1']
dataframe['s1'] = pp['s1']
dataframe['r2'] = pp['r2']
dataframe['s2'] = pp['s2']
dataframe['r3'] = pp['r3']
dataframe['s3'] = pp['s3']
dataframe['rawclose'] = dataframe['close']
dataframe['%-dist_to_r1'] = get_distance(dataframe['close'], dataframe['r1'])
dataframe['%-dist_to_r2'] = get_distance(dataframe['close'], dataframe['r2'])
dataframe['%-dist_to_r3'] = get_distance(dataframe['close'], dataframe['r3'])
dataframe['%-dist_to_s1'] = get_distance(dataframe['close'], dataframe['s1'])
dataframe['%-dist_to_s2'] = get_distance(dataframe['close'], dataframe['s2'])
dataframe['%-dist_to_s3'] = get_distance(dataframe['close'], dataframe['s3'])
dataframe["%-pct-change"] = dataframe["close"].pct_change()
dataframe["%-raw_volume"] = dataframe["volume"]
dataframe["%-raw_price"] = dataframe["close"]
dataframe["%-raw_open"] = dataframe["open"]
dataframe["%-raw_low"] = dataframe["low"]
dataframe["%-raw_high"] = dataframe["high"]
return dataframe
def feature_engineering_standard(self, dataframe, **kwargs):
dataframe["%-day_of_week"] = (dataframe["date"].dt.dayofweek + 1) / 7
dataframe["%-hour_of_day"] = (dataframe["date"].dt.hour + 1) / 25
return dataframe
def set_freqai_targets(self, dataframe, **kwargs):
dataframe["&s-extrema"] = 0
min_peaks = argrelextrema(
dataframe["low"].values, np.less,
order=self.freqai_info["feature_parameters"]["label_period_candles"]
)
max_peaks = argrelextrema(
dataframe["high"].values, np.greater,
order=self.freqai_info["feature_parameters"]["label_period_candles"]
)
for mp in min_peaks[0]:
dataframe.at[mp, "&s-extrema"] = -1
for mp in max_peaks[0]:
dataframe.at[mp, "&s-extrema"] = 1
dataframe["minima"] = np.where(dataframe["&s-extrema"] == -1, 1, 0)
dataframe["maxima"] = np.where(dataframe["&s-extrema"] == 1, 1, 0)
dataframe['&s-extrema'] = dataframe['&s-extrema'].rolling(
window=5, win_type='gaussian', center=True).mean(std=0.5)
return dataframe
def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
dataframe = self.freqai.start(dataframe, metadata, self)
dataframe["DI_catch"] = np.where(
dataframe["DI_values"] > dataframe["DI_cutoff"], 0, 1,
)
dataframe["minima_sort_threshold"] = dataframe["&s-minima_sort_threshold"]
dataframe["maxima_sort_threshold"] = dataframe["&s-maxima_sort_threshold"]
return dataframe
def populate_entry_trend(self, df: DataFrame, metadata: dict) -> DataFrame:
enter_long_conditions = [
df["do_predict"] == 1,
df["DI_catch"] == 1,
df["&s-extrema"] < df["minima_sort_threshold"],
]
if enter_long_conditions:
df.loc[
reduce(lambda x, y: x & y, enter_long_conditions), [
"enter_long", "enter_tag"]
] = (1, "long")
enter_short_conditions = [
df["do_predict"] == 1,
df["DI_catch"] == 1,
df["&s-extrema"] > df["maxima_sort_threshold"],
]
if enter_short_conditions:
df.loc[
reduce(lambda x, y: x & y, enter_short_conditions), [
"enter_short", "enter_tag"]
] = (1, "short")
return df
def populate_exit_trend(self, df: DataFrame, metadata: dict) -> DataFrame:
return df
def custom_exit(
self,
pair: str,
trade: Trade,
current_time: datetime,
current_rate: float,
current_profit: float,
**kwargs
):
dataframe, _ = self.dp.get_analyzed_dataframe(
pair=pair, timeframe=self.timeframe)
last_candle = dataframe.iloc[-1].squeeze()
trade_date = timeframe_to_prev_date(
self.timeframe, (trade.open_date_utc -
timedelta(minutes=int(self.timeframe[:-1])))
)
trade_candle = dataframe.loc[(dataframe["date"] == trade_date)]
if trade_candle.empty:
return None
trade_candle = trade_candle.squeeze()
entry_tag = trade.enter_tag
trade_duration = (current_time - trade.open_date_utc).seconds / 60
if trade_duration > 1000:
return "trade expired"
if last_candle["DI_catch"] == 0:
return "Outlier detected"
if (
last_candle["&s-extrema"] < last_candle["minima_sort_threshold"]
and entry_tag == "short"
):
return "minimia_detected_short"
if (
last_candle["&s-extrema"] > last_candle["maxima_sort_threshold"]
and entry_tag == "long"
):
return "maxima_detected_long"
def confirm_trade_entry(
self,
pair: str,
order_type: str,
amount: float,
rate: float,
time_in_force: str,
current_time: datetime,
entry_tag: Optional[str],
side: str,
**kwargs
) -> bool:
open_trades = Trade.get_trades(trade_filter=Trade.is_open.is_(True))
num_shorts, num_longs = 0, 0
for trade in open_trades:
if "short" in trade.enter_tag:
num_shorts += 1
elif "long" in trade.enter_tag:
num_longs += 1
if side == "long" and num_longs >= 5:
return False
if side == "short" and num_shorts >= 5:
return False
df, _ = self.dp.get_analyzed_dataframe(pair, self.timeframe)
last_candle = df.iloc[-1].squeeze()
if side == "long":
if rate > (last_candle["close"] * (1 + 0.0025)):
return False
else:
if rate < (last_candle["close"] * (1 - 0.0025)):
return False
return True
def top_percent_change(dataframe: DataFrame, length: int) -> float:
"""
Percentage change of the current close from the range maximum Open price
:param dataframe: DataFrame The original OHLC dataframe
:param length: int The length to look back
"""
if length == 0:
return (dataframe['open'] - dataframe['close']) / dataframe['close']
else:
return (dataframe['open'].rolling(length).max() - dataframe['close']) / dataframe['close']
def chaikin_mf(df, periods=20):
close = df['close']
low = df['low']
high = df['high']
volume = df['volume']
mfv = ((close - low) - (high - close)) / (high - low)
mfv = mfv.fillna(0.0)
mfv *= volume
cmf = mfv.rolling(periods).sum() / volume.rolling(periods).sum()
return Series(cmf, name='cmf')
def VWAPB(dataframe, window_size=20, num_of_std=1):
df = dataframe.copy()
df['vwap'] = qtpylib.rolling_vwap(df, window=window_size)
rolling_std = df['vwap'].rolling(window=window_size).std()
df['vwap_low'] = df['vwap'] - (rolling_std * num_of_std)
df['vwap_high'] = df['vwap'] + (rolling_std * num_of_std)
return df['vwap_low'], df['vwap'], df['vwap_high']
def EWO(dataframe, sma_length=5, sma2_length=35):
df = dataframe.copy()
sma1 = ta.EMA(df, timeperiod=sma_length)
sma2 = ta.EMA(df, timeperiod=sma2_length)
smadif = (sma1 - sma2) / df['close'] * 100
return smadif
def get_distance(p1, p2):
return abs((p1) - (p2))
```
## The model:
``` python
import logging
from typing import Any, Dict, Tuple
from xgboost import XGBRegressor
import time
from freqtrade.freqai.base_models.BaseRegressionModel import BaseRegressionModel
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
import pandas as pd
import scipy as spy
import numpy.typing as npt
from pandas import DataFrame
import numpy as np
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
logger = logging.getLogger(__name__)
class XGBoostRegressorQuickAdapterV3(BaseRegressionModel):
"""
The following freqaimodel is released to sponsors of the non-profit FreqAI open-source project.
If you find the FreqAI project useful, please consider supporting it by becoming a sponsor.
We use sponsor money to help stimulate new features and to pay for running these public
experiments, with a an objective of helping the community make smarter choices in their
ML journey.
This strategy is experimental (as with all strategies released to sponsors). Do *not* expect
returns. The goal is to demonstrate gratitude to people who support the project and to
help them find a good starting point for their own creativity.
If you have questions, please direct them to our discord: https://discord.gg/xE4RMg4QYw
https://github.com/sponsors/robcaulk
"""
def fit(self, data_dictionary: Dict, dk: FreqaiDataKitchen, **kwargs) -> Any:
"""
User sets up the training and test data to fit their desired model here
:param data_dictionary: the dictionary constructed by DataHandler to hold
all the training and test data/labels.
"""
X = data_dictionary["train_features"]
y = data_dictionary["train_labels"]
if self.freqai_info.get("data_split_parameters", {}).get("test_size", 0.1) == 0:
eval_set = None
eval_weights = None
else:
eval_set = [(data_dictionary["test_features"], data_dictionary["test_labels"])]
eval_weights = [data_dictionary['test_weights']]
sample_weight = data_dictionary["train_weights"]
xgb_model = self.get_init_model(dk.pair)
model = XGBRegressor(**self.model_training_parameters)
start = time.time()
model.fit(X=X, y=y, sample_weight=sample_weight, eval_set=eval_set,
sample_weight_eval_set=eval_weights, xgb_model=xgb_model)
time_spent = (time.time() - start)
self.dd.update_metric_tracker('fit_time', time_spent, dk.pair)
return model
def fit_live_predictions(self, dk: FreqaiDataKitchen, pair: str) -> None:
warmed_up = True
num_candles = self.freqai_info.get('fit_live_predictions_candles', 100)
if self.live:
if not hasattr(self, 'exchange_candles'):
self.exchange_candles = len(self.dd.model_return_values[pair].index)
candle_diff = len(self.dd.historic_predictions[pair].index) - \
(num_candles + self.exchange_candles)
if candle_diff < 0:
logger.warning(
f'Fit live predictions not warmed up yet. Still {abs(candle_diff)} candles to go')
warmed_up = False
pred_df_full = self.dd.historic_predictions[pair].tail(num_candles).reset_index(drop=True)
pred_df_sorted = pd.DataFrame()
for label in pred_df_full.keys():
if pred_df_full[label].dtype == object:
continue
pred_df_sorted[label] = pred_df_full[label]
# pred_df_sorted = pred_df_sorted
for col in pred_df_sorted:
pred_df_sorted[col] = pred_df_sorted[col].sort_values(
ascending=False, ignore_index=True)
frequency = num_candles / (self.freqai_info['feature_parameters']['label_period_candles'] * 2)
max_pred = pred_df_sorted.iloc[:int(frequency)].mean()
min_pred = pred_df_sorted.iloc[-int(frequency):].mean()
if not warmed_up:
dk.data['extra_returns_per_train']['&s-maxima_sort_threshold'] = 2
dk.data['extra_returns_per_train']['&s-minima_sort_threshold'] = -2
else:
dk.data['extra_returns_per_train']['&s-maxima_sort_threshold'] = max_pred['&s-extrema']
dk.data['extra_returns_per_train']['&s-minima_sort_threshold'] = min_pred['&s-extrema']
dk.data["labels_mean"], dk.data["labels_std"] = {}, {}
for ft in dk.label_list:
# f = spy.stats.norm.fit(pred_df_full[ft])
dk.data['labels_std'][ft] = 0 # f[1]
dk.data['labels_mean'][ft] = 0 # f[0]
# fit the DI_threshold
if not warmed_up:
f = [0, 0, 0]
cutoff = 2
else:
f = spy.stats.weibull_min.fit(pred_df_full['DI_values'])
cutoff = spy.stats.weibull_min.ppf(0.999, *f)
dk.data["DI_value_mean"] = pred_df_full['DI_values'].mean()
dk.data["DI_value_std"] = pred_df_full['DI_values'].std()
dk.data['extra_returns_per_train']['DI_value_param1'] = f[0]
dk.data['extra_returns_per_train']['DI_value_param2'] = f[1]
dk.data['extra_returns_per_train']['DI_value_param3'] = f[2]
dk.data['extra_returns_per_train']['DI_cutoff'] = cutoff
```
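As a quick sanity check, the threshold window used by `fit_live_predictions` works out to just three candles with the settings above (sketch below; both inputs are taken from the config, and I'm assuming backtesting uses the same values as live):
```python
# Threshold window used by fit_live_predictions, computed from the config above
# (assumption: backtesting runs with the same settings as live).
fit_live_predictions_candles = 600  # freqai.fit_live_predictions_candles
label_period_candles = 100          # feature_parameters.label_period_candles

frequency = fit_live_predictions_candles / (label_period_candles * 2)
print(int(frequency))  # 3 -> the top/bottom 3 sorted predictions are averaged
                       # into the maxima/minima sort thresholds
```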
|
closed
|
2024-09-15T17:20:27Z
|
2024-10-01T16:14:25Z
|
https://github.com/freqtrade/freqtrade/issues/10654
|
[
"Question",
"freqAI"
] |
luca-palese
| 3
|
newpanjing/simpleui
|
django
| 251
|
Uncaught TypeError: Cannot read property 'replace' of undefined
|
#### SimpleUI version:
3.9.1
#### Steps to reproduce:
Log in to the admin home page.
#### Browser F12 console error log:
index.js?_=3.9.1:98 Uncaught TypeError: Cannot read property 'replace' of undefined
at window.simple_call (index.js?_=3.9.1:98)
at latest?callback=simple_call:1
#### Code snippet where the error occurs:
window.simple_call = function (data) {
var o = __simpleui_version.replace(/\./g, '');
var n = data.data.name.replace(/\./g, '');
while(o.length!=3){
o += '0';
}
while(n.length!=3){
n += '0';
}
var oldVersion = parseInt(o)
var newVersion = parseInt(n)
var body = data.data.body;
if (oldVersion < newVersion) {
app.upgrade.isUpdate = true;
app.upgrade.body = body;
app.upgrade.version = data.data.name;
}
}
#### Request URL:
[https://api.github.com/repos/newpanjing/simpleui/releases/latest?callback=simple_call](https://api.github.com/repos/newpanjing/simpleui/releases/latest?callback=simple_call)
#### Cause of the error:
The response data is missing the expected field, so `data.data.name` is undefined and calling `.replace` on it throws.
|
closed
|
2020-04-10T07:25:13Z
|
2020-05-05T05:22:17Z
|
https://github.com/newpanjing/simpleui/issues/251
|
[
"bug"
] |
yinzhuoqun
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
pytorch
| 307
|
Error using pretrained vocoder
|
When I click "Vocode only" or "Synthesize and vocode" using the pretrained vocoder, this happens:
cuda runtime error(100) : no CUDA-capable device is detected at ../aten/src/THC/THCGeneral.cpp:50
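A minimal check (sketch, assuming a standard PyTorch install) to confirm whether a CUDA device is actually visible to torch before launching the toolbox:
```python
# Minimal sketch to verify CUDA visibility (assumes PyTorch is installed).
import torch

if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
else:
    print("No CUDA-capable device detected; the toolbox would have to run on CPU.")
```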
|
closed
|
2020-04-02T23:21:32Z
|
2020-07-04T22:02:20Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/307
|
[] |
Samtapes
| 1
|