| repo_name (stringlengths 9-75) | topic (stringclasses, 30 values) | issue_number (int64 1-203k) | title (stringlengths 1-976) | body (stringlengths 0-254k) | state (stringclasses, 2 values) | created_at (stringlengths 20-20) | updated_at (stringlengths 20-20) | url (stringlengths 38-105) | labels (listlengths 0-9) | user_login (stringlengths 1-39) | comments_count (int64 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
ckan/ckan
|
api
| 8,333
|
Internal Server Error sometimes occurs on the CKAN UI while adding a Resource
|
## CKAN version
2.10.1
I set up CKAN with Docker by following this link: https://github.com/ckan/ckan-docker
## Describe the bug
While adding resources on the CKAN UI, a 500 Internal Server Error is shown on the UI.
### Steps to reproduce
1. Log in to the CKAN UI
2. Add an Organization and a dataset
3. Try to add a Resource file (CSV) multiple times to the created dataset.
### Error in CKAN Logs
```
Traceback (most recent call last):
  File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
    resp = conn.urlopen(
  File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 785, in urlopen
    retries = retries.increment(
  File "/usr/lib/python3.10/site-packages/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/lib/python3.10/site-packages/urllib3/packages/six.py", line 770, in reraise
    raise value
  File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 451, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 340, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='datapusher', port=8800): Read timed out. (read timeout=5)
```
### Error in Datapusher Logs
```
Tue Jul 9 07:54:32 2024 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 306] during POST /job (192.168.0.3)
OSError: write error
```
### Expected behavior
A 500 Internal Server Error should not be displayed on the UI, and the connection read timeout error should not occur.
### Additional details
I have observed a read timeout error in the CKAN logs while submitting the job to the data pusher. I tried increasing the ckan.request.timeout value from 5 to 10 and then to 15 in the ckan.ini file. As a result, the frequency of the issue has been reduced.
Does this issue still exist, or is there a known solution for it?
|
open
|
2024-07-09T09:12:47Z
|
2024-07-10T11:52:37Z
|
https://github.com/ckan/ckan/issues/8333
|
[] |
kumardeepak5
| 2
|
deepspeedai/DeepSpeed
|
deep-learning
| 6,747
|
Model Checkpoint docs are incorrectly rendered on deepspeed.readthedocs.io
|
The top google search hit for most deepspeed documentation searches is https://deepspeed.readthedocs.io/ or its various sub-pages. I assume this export is somehow maintained or at least enabled by the deepspeed team, hence this bug report.
The page on model checkpointing at https://deepspeed.readthedocs.io/en/latest/model-checkpointing.html#model-checkpointing shows empty sections for "Loading Training Checkpoints" and "Saving Training Checkpoints", which I have found confusing on a few occasions, whereas the rst file in the repo has `autofunction` declarations inside these sections, e.g. https://github.com/microsoft/DeepSpeed/blob/b692cdea479fba8201584054d654f639e925a265/docs/code-docs/source/model-checkpointing.rst
Evidently the autofunction directives there are not being correctly rendered by the doc export.
|
open
|
2024-11-12T19:52:24Z
|
2024-11-22T21:18:29Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6747
|
[
"bug",
"documentation"
] |
akeshet
| 3
|
holoviz/colorcet
|
matplotlib
| 3
|
license of color scales
|
Hi, I think these color scales are great and would love to have them available to the Julia community. What license are they released under? I would like to copy the scales to the Julia package PlotUtils, which is under the MIT license. A reference to this repo would appear in the source file and in the documentation (at juliaplots.github.io).
Thanks!
|
closed
|
2017-03-13T11:56:14Z
|
2018-08-24T13:40:02Z
|
https://github.com/holoviz/colorcet/issues/3
|
[] |
mkborregaard
| 3
|
huggingface/datasets
|
pytorch
| 7,213
|
Add with_rank to Dataset.from_generator
|
### Feature request
Add `with_rank` to `Dataset.from_generator` similar to `Dataset.map` and `Dataset.filter`.
### Motivation
As with `Dataset.map` and `Dataset.filter`, this is useful when creating cache files with multiple GPUs, where the rank can be used to select GPU IDs. For now, the rank can be passed in the `gen_kwargs` argument; however, this in turn includes the rank when computing the fingerprint.
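For reference, a minimal sketch of the `gen_kwargs` workaround described above (the field names and two-worker setup are illustrative): a list passed through `gen_kwargs` is sharded across the `num_proc` workers, so each generator call can read its own rank, but the list is also hashed into the fingerprint, which is the drawback motivating this request.
```python
from datasets import Dataset

def gen(rank_shard):
    # With num_proc=2, the two-element list below is split so each worker
    # receives a one-element shard; that value stands in for the worker's rank.
    rank = rank_shard[0]
    for i in range(4):
        yield {"rank": rank, "value": i}

ds = Dataset.from_generator(
    gen,
    gen_kwargs={"rank_shard": [0, 1]},  # sharded across workers, but part of the fingerprint
    num_proc=2,
)
```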
### Your contribution
Added #7199 which passes rank based on the `job_id` set by `num_proc`.
|
open
|
2024-10-10T12:15:29Z
|
2024-10-10T12:17:11Z
|
https://github.com/huggingface/datasets/issues/7213
|
[
"enhancement"
] |
muthissar
| 0
|
ClimbsRocks/auto_ml
|
scikit-learn
| 438
|
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 2 concurrent workers. BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
|
open
|
2021-02-11T07:48:07Z
|
2023-10-13T02:28:14Z
|
https://github.com/ClimbsRocks/auto_ml/issues/438
|
[] |
elnurmmmdv
| 1
|
|
horovod/horovod
|
pytorch
| 3,354
|
Trying to get in touch regarding a security issue
|
Hey there!
I belong to an open source security research community, and a member (@srikanthprathi) has found an issue, but doesn’t know the best way to disclose it.
If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future.
Thank you for your consideration, and I look forward to hearing from you!
(cc @huntr-helper)
|
closed
|
2022-01-09T00:13:04Z
|
2022-01-19T19:25:36Z
|
https://github.com/horovod/horovod/issues/3354
|
[] |
JamieSlome
| 3
|
coqui-ai/TTS
|
python
| 2,744
|
[Bug] Config for bark cannot be found
|
### Describe the bug
Attempting to use Bark through the TTS API (using `tts = TTS("tts_models/multilingual/multi-dataset/bark")`) throws the following error.
```
Traceback (most recent call last):
File "c:\Users\timeb\Desktop\BarkAI\Clone and Product TTS.py", line 7, in <module>
tts = TTS("tts_models/multilingual/multi-dataset/bark")
File "C:\Users\timeb\AppData\Local\Programs\Python\Python39\lib\site-packages\TTS\api.py", line 289, in __init__
self.load_tts_model_by_name(model_name, gpu)
File "C:\Users\timeb\AppData\Local\Programs\Python\Python39\lib\site-packages\TTS\api.py", line 391, in load_tts_model_by_name
self.synthesizer = Synthesizer(
File "C:\Users\timeb\AppData\Local\Programs\Python\Python39\lib\site-packages\TTS\utils\synthesizer.py", line 107, in __init__
self._load_tts_from_dir(model_dir, use_cuda)
File "C:\Users\timeb\AppData\Local\Programs\Python\Python39\lib\site-packages\TTS\utils\synthesizer.py", line 159, in _load_tts_from_dir
config = load_config(os.path.join(model_dir, "config.json"))
File "C:\Users\timeb\AppData\Local\Programs\Python\Python39\lib\site-packages\TTS\config\__init__.py", line 94, in load_config
config_class = register_config(model_name.lower())
File "C:\Users\timeb\AppData\Local\Programs\Python\Python39\lib\site-packages\TTS\config\__init__.py", line 47, in register_config
raise ModuleNotFoundError(f" [!] Config for {model_name} cannot be found.")
ModuleNotFoundError: [!] Config for bark cannot be found.
```
I know of one other bug report on this issue, #2722. However, that was supposedly fixed in 0.15.2.
### To Reproduce
Run the following example code:
```
from TTS.api import TTS
# Load the model to GPU
# Bark is really slow on CPU, so we recommend using GPU.
tts = TTS("tts_models/multilingual/multi-dataset/bark")
# Cloning a new speaker
# This expects to find a mp3 or wav file like `bark_voices/new_speaker/speaker.wav`
# It computes the cloning values and stores in `bark_voices/new_speaker/speaker.npz`
tts.tts_to_file(text="Hello, how are you?",
                file_path="output.wav",
                voice_dir="bark_voices/",
                speaker="anna")
# When you run it again it uses the stored values to generate the voice.
tts.tts_to_file(text="Hello, how are you?",
                file_path="output.wav",
                voice_dir="bark_voices/",
                speaker="anna")
```
### Expected behavior
Produce output.wav using Bark.
### Logs
_No response_
### Environment
```shell
-OS: Windows
-Python Version: 3.9.13
-TTS Version: 0.15.5
-Using CPU Only
```
### Additional context
_No response_
|
closed
|
2023-07-06T23:15:51Z
|
2023-07-11T19:21:48Z
|
https://github.com/coqui-ai/TTS/issues/2744
|
[
"bug"
] |
JLPiper
| 5
|
fastapi/sqlmodel
|
fastapi
| 346
|
SQLAlchemy-Continuum compatibility
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional
from sqlalchemy_continuum import make_versioned
from sqlmodel import Field, Session, SQLModel, create_engine
from sqlmodel.main import default_registry
# setattr(SQLModel, 'registry', default_registry)
make_versioned(user_cls=None)
class Hero(SQLModel, table=True):
    __versioned__ = {}
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    secret_name: str
    age: Optional[int] = None

hero_1 = Hero(name="Deadpond", secret_name="Dive Wilson")
engine = create_engine("sqlite:///database.db")
SQLModel.metadata.create_all(engine)
with Session(engine) as session:
    session.add(hero_1)
    session.commit()
    session.refresh(hero_1)
print(hero_1)
```
### Description
A very [basic setup](https://sqlalchemy-continuum.readthedocs.io/en/latest/intro.html#installation) of SQLAlchemy-Continuum doesn't work with SQLModel. The following error is raised (see [here](https://github.com/kvesteri/sqlalchemy-continuum/blob/1.3.12/sqlalchemy_continuum/factory.py#L10-L13)):
```
AttributeError: type object 'SQLModel' has no attribute '_decl_class_registry'
```
The `SQLModel` class is missing an attribute (`registry` for SQLAlchemy >= 1.4) expected from SQLAlchemy's `Base`. For debugging purposes the attribute can be populated manually (uncomment `# setattr(SQLModel, 'registry', default_registry)` in the example code), which allows things to proceed to the next error:
```
<skipped>
File ".../.venv/lib/python3.10/site-packages/sqlmodel/main.py", line 277, in __new__
new_cls = super().__new__(cls, name, bases, dict_used, **config_kwargs)
File "pydantic/main.py", line 228, in pydantic.main.ModelMetaclass.__new__
File "pydantic/fields.py", line 488, in pydantic.fields.ModelField.infer
File "pydantic/fields.py", line 419, in pydantic.fields.ModelField.__init__
File "pydantic/fields.py", line 528, in pydantic.fields.ModelField.prepare
File "pydantic/fields.py", line 552, in pydantic.fields.ModelField._set_default_and_type
File "pydantic/fields.py", line 422, in pydantic.fields.ModelField.get_default
File "pydantic/utils.py", line 652, in pydantic.utils.smart_deepcopy
File ".../.venv/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 582, in __bool__
raise TypeError("Boolean value of this clause is not defined")
TypeError: Boolean value of this clause is not defined
```
It looks like there is some incompatibility with SQLAlchemy's expected behaviour, because this error is thrown just after SQLModel's `__new__` method call.
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.10.2
### Additional Context
_No response_
|
open
|
2022-05-19T13:34:36Z
|
2024-12-05T19:03:11Z
|
https://github.com/fastapi/sqlmodel/issues/346
|
[
"question"
] |
petyunchik
| 9
|
flasgger/flasgger
|
flask
| 35
|
Change "no content" response from 204 status
|
Hi,
is it possible to change the response body when the status code is 204?
The only way I found was to change the JavaScript code.
|
closed
|
2016-10-14T19:27:35Z
|
2017-03-24T20:19:49Z
|
https://github.com/flasgger/flasgger/issues/35
|
[
"enhancement",
"help wanted"
] |
andryw
| 1
|
paulbrodersen/netgraph
|
matplotlib
| 56
|
Hyperlink or selectable text from annotations?
|
Hey,
I'm currently using the library on a project and have a use case for linking to a URL based on node attributes. I have the annotation showing on node click, but realized that the link is not selectable or clickable. Would it be possible to do either of those (hyperlinks would be preferred, but selectable text is good too)?
Thanks
|
closed
|
2022-12-16T04:13:00Z
|
2022-12-23T20:43:33Z
|
https://github.com/paulbrodersen/netgraph/issues/56
|
[] |
a-arbabian
| 6
|
marshmallow-code/flask-smorest
|
rest-api
| 491
|
grafana + Prometheus + smorest integration
|
How could we integrate Grafana and Prometheus with smorest? Is there an easy solution for it, or is there something else to monitor smorest? Prometheus is well integrated with flask-restful, isn't it?
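For what it's worth, since flask-smorest sits on top of a regular Flask app, a generic Flask exporter can be attached to it; below is a minimal sketch assuming the third-party `prometheus_flask_exporter` package (not part of flask-smorest), which exposes a `/metrics` endpoint that Prometheus can scrape and Grafana can then chart.
```python
from flask import Flask
from flask_smorest import Api
from prometheus_flask_exporter import PrometheusMetrics  # assumed extra dependency

app = Flask(__name__)
# flask-smorest requires these settings before Api(app) is created
app.config["API_TITLE"] = "Demo API"
app.config["API_VERSION"] = "v1"
app.config["OPENAPI_VERSION"] = "3.0.2"

api = Api(app)
metrics = PrometheusMetrics(app)  # exposes request counters/latencies at /metrics

if __name__ == "__main__":
    app.run()
```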
|
closed
|
2023-04-16T21:46:46Z
|
2023-08-18T15:58:55Z
|
https://github.com/marshmallow-code/flask-smorest/issues/491
|
[
"question"
] |
justfortest-sketch
| 2
|
dpgaspar/Flask-AppBuilder
|
rest-api
| 1,741
|
update docs website for Flask-AppBuilder `3.4.0`
|
Flask-AppBuilder 3.4.0 was [released 2 days ago](https://github.com/dpgaspar/Flask-AppBuilder/releases/tag/v3.4.0), but the https://flask-appbuilder.readthedocs.io/en/latest/index.html website still only has `3.3.0` visible under the `latest` tag.
|
closed
|
2021-11-12T02:54:43Z
|
2021-11-14T17:46:25Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/1741
|
[] |
thesuperzapper
| 2
|
reloadware/reloadium
|
pandas
| 189
|
Hot reloading of HTML files not working with reloadium (works fine with just webpack-dev-server and python manage.py runserver)
|
## Describe the bug
I have a Django project that uses webpack to serve some frontend assets and to hot reload JS/CSS/HTML files.
Hot reloading of HTML files works great when I run Django normally.
However, when I run the project using the PyCharm addon of Reloadium, hot reloading of HTML files doesn't work.
The page refreshes when I save, but the changes aren't displayed until I manually refresh the page.
Changes made to Python, CSS, and JS files still work great.
## To Reproduce
Steps to reproduce the behavior:
- Set up a django project that uses webpack with hot-reloading enabled
(I will have a hard time describing the process step by step... I can put the config files below if asked. I will put some bits that seem relevant.)
- Run django project using Reloadium addon
## Expected behavior
When I change the content of my HTML file and save it while using Reloadium, the change should be displayed on my webpage.
## Screenshots
None
## Desktop or remote (please complete the following information):
- OS: Windows
- OS version: 10
- M1 chip: No
- Reloadium package version: 1.4.0 (latest addon version)
- PyCharm plugin version: 1.4.0 (latest addon version)
- Editor: PyCharm
- Python Version: 3.11.8
- Python Architecture: 64bit
- Run mode: Run & Debug (both modes cause issue)
## Additional context
Some extracts of my config files that may be relevant :
start command used in package.json :
```"start": "npx webpack-dev-server --config webpack/webpack.config.dev.js"```
webpack.config.common.js :
```
const Path = require('path');
const { CleanWebpackPlugin } = require('clean-webpack-plugin');
const CopyWebpackPlugin = require('copy-webpack-plugin');
const HtmlWebpackPlugin = require('html-webpack-plugin');
var BundleTracker = require('webpack-bundle-tracker');
module.exports = {
entry: {
base: Path.resolve(__dirname, '../src/scripts/base.js'),
contact_form: Path.resolve(__dirname, '../src/scripts/contact_form.js'),
},
output: {
path: Path.join(__dirname, '../build'),
filename: 'js/[name].js',
publicPath: '/static/',
},
optimization: {
splitChunks: {
chunks: 'all',
name: false,//'vendors',//false,
},
},
plugins: [
new BundleTracker({
//path: path.join(__dirname, 'frontend/'),
filename: 'webpack-stats.json',
}),
new CleanWebpackPlugin(),
new CopyWebpackPlugin({
patterns: [{ from: Path.resolve(__dirname, '../public'), to: 'public' }],
}),
/*new HtmlWebpackPlugin({
template: Path.resolve(__dirname, '../src/index.html'),
//inject: false,
}),*/
],
resolve: {
alias: {
'~': Path.resolve(__dirname, '../src'),
},
},
module: {
rules: [
{
test: /\.mjs$/,
include: /node_modules/,
type: 'javascript/auto',
},
{
test: /\.html$/i,
loader: 'html-loader',
},
{
test: /\.(ico|jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2)(\?.*)?$/,
type: 'asset'
},
],
},
};
```
webpack.config.dev.js :
```const Path = require('path');
const Webpack = require('webpack');
const { merge } = require('webpack-merge');
const ESLintPlugin = require('eslint-webpack-plugin');
const StylelintPlugin = require('stylelint-webpack-plugin');
const common = require('./webpack.common.js');
module.exports = merge(common, {
target: 'web',
mode: 'development',
devtool: 'eval-cheap-source-map',
output: {
chunkFilename: 'js/[name].chunk.js',
publicPath: 'http://localhost:9091/static/',
},
devServer: {
client: {
logging: 'error',
},
static:
[{directory: Path.join(__dirname, "../public")}
,{directory: Path.join(__dirname, "../src/images")}
,{directory: Path.join(__dirname, "../../accueil/templates/accueil")}
,{directory: Path.join(__dirname, "../../templates")}
]
,
compress: true,
watchFiles: [
//Path.join(__dirname, '../../**/*.py'),
Path.join(__dirname, '../src/**/*.js'),
Path.join(__dirname, '../src/**/*.scss'),
Path.join(__dirname, '../../templates/**/*.html'),
Path.join(__dirname, '../../accueil/templates/accueil/*.html'),
],
devMiddleware: {
writeToDisk: true,
},
port: 9091,
headers: {
"Access-Control-Allow-Origin": "*",
}
},
plugins: [
new Webpack.DefinePlugin({
'process.env.NODE_ENV': JSON.stringify('development'),
}),
new ESLintPlugin({
extensions: 'js',
emitWarning: true,
files: Path.resolve(__dirname, '../src'),
fix: true,
}),
new StylelintPlugin({
files: Path.join('src', '**/*.s?(a|c)ss'),
fix: true,
}),
],
module: {
rules: [
/*{ //Added from default
test: /\.html$/i,
loader: "html-loader",
},*/
{
test: /\.js$/,
include: Path.resolve(__dirname, '../src'),
loader: 'babel-loader',
},
{
test: /\.s?css$/i,
include: Path.resolve(__dirname, '../src'),
use: [
'style-loader',
{
loader: 'css-loader',
options: {
sourceMap: true,
},
},
'postcss-loader',
'sass-loader',
],
},
],
},
});
```
|
closed
|
2024-04-03T17:38:26Z
|
2024-05-20T23:42:57Z
|
https://github.com/reloadware/reloadium/issues/189
|
[] |
antoineprobst
| 2
|
quantmind/pulsar
|
asyncio
| 15
|
Profiling task
|
Add functionality to profile a task run using the python profiler.
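A minimal sketch, independent of pulsar's task API, of what profiling a task callable with the standard-library `cProfile` module could look like (the task body and output size here are illustrative):
```python
import cProfile
import io
import pstats

def run_task():
    # stand-in for the body of a task run
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
run_task()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())  # top 10 entries by cumulative time
```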
|
closed
|
2012-02-10T12:06:14Z
|
2012-12-13T22:06:20Z
|
https://github.com/quantmind/pulsar/issues/15
|
[
"taskqueue"
] |
lsbardel
| 1
|
PeterL1n/RobustVideoMatting
|
computer-vision
| 67
|
Foreground prediction
|
During the training stage, did you encounter the stage-1 foreground prediction maps being overall reddish or greenish? Is there any way to fix this?
|
closed
|
2021-10-08T04:45:12Z
|
2021-12-16T01:24:48Z
|
https://github.com/PeterL1n/RobustVideoMatting/issues/67
|
[] |
lyc6749
| 10
|
chaos-genius/chaos_genius
|
data-visualization
| 1,007
|
[BUG] Dependency conflict for PyArrow
|
## Describe the bug
PyAthena installs a version of PyArrow that is incompatible with Snowflake's PyArrow requirement.
## Explain the environment
- **Chaos Genius version**: 0.9.0 development build
- **OS Version / Instance**: any
- **Deployment type**: any
## Current behavior
warning while pip install
## Expected behavior
pip install warning should not be there
## Screenshots

## Additional context
N/A
## Logs
```
/home/samyak/work/ChaosGenius/chaos_genius/.direnv/python-3.8.13/lib64/python3.8/site-packages/snowflake/connector/options.py:96: UserWarning: You have an incompatible version of 'pyarrow' installed (8.0.0), please install a version that adheres to: 'pyarrow<6.1.0,>=6.0.0; extra == "pandas"'
  warn_incompatible_dep(
```
|
closed
|
2022-06-27T11:16:43Z
|
2022-06-29T12:39:08Z
|
https://github.com/chaos-genius/chaos_genius/issues/1007
|
[] |
rjdp
| 0
|
explosion/spaCy
|
nlp
| 13,712
|
thinc conflicting versions
|
## How to reproduce the behaviour
I had an issue with thinc when I updated numpy:
`thinc 8.3.2 has requirement numpy<2.1.0,>=2.0.0; python_version >= "3.9", but you have numpy 2.1.3.`
Therefore I updated thinc to its latest version, `9.1.1`, but now I'm having an issue with spaCy:
`spacy 3.8.2 depends on thinc<8.4.0 and >=8.3.0`
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: WSL Ubuntu 20.04.6
* Python Version Used: Python 3.11
* spaCy Version Used: spacy 3.8.2
|
open
|
2024-12-09T16:43:01Z
|
2024-12-10T16:43:18Z
|
https://github.com/explosion/spaCy/issues/13712
|
[] |
kimlizette
| 1
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 176
|
Add a way of computing logits easily during inference
|
Right now you can only compute the loss, but there are cases where people want to do classification using the logits (see https://github.com/KevinMusgrave/pytorch-metric-learning/issues/175)
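For context, here is a generic PyTorch sketch (not this library's API) of the kind of logits being asked for: cosine-similarity scores between L2-normalized embeddings and per-class weight vectors, which classification-style metric losses compute internally. The class-weight tensor and scale value below are illustrative stand-ins.
```python
import torch
import torch.nn.functional as F

num_classes, embedding_dim = 10, 128
class_weights = torch.randn(num_classes, embedding_dim)  # stand-in for the loss's learned class proxies

def cosine_logits(embeddings: torch.Tensor, weights: torch.Tensor, scale: float = 64.0) -> torch.Tensor:
    # Normalize both sides so the dot product is a cosine similarity, then scale it.
    emb = F.normalize(embeddings, dim=1)
    w = F.normalize(weights, dim=1)
    return scale * emb @ w.t()

embeddings = torch.randn(4, embedding_dim)          # a batch of 4 embeddings
logits = cosine_logits(embeddings, class_weights)   # shape (4, num_classes)
predictions = logits.argmax(dim=1)
```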
|
closed
|
2020-08-11T04:18:20Z
|
2020-09-14T09:56:47Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/176
|
[
"enhancement",
"fixed in dev branch"
] |
KevinMusgrave
| 0
|
xuebinqin/U-2-Net
|
computer-vision
| 149
|
Taking Too Much CPU and Memory (RAM) on CPU
|
@NathanUA Thank you very much for such a good contribution. I am using the u2netp model, which is 4.7 MB in size, for inference on a webcam video feed. I am running it on my CPU device, but it is taking too much memory and CPU processing power, and my device gets slower while testing. Can you help me make it more memory efficient so I can use it on a CPU-only machine? Thank you.
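A few generic PyTorch-level knobs that often help on CPU, sketched under the assumption that inference runs through a standard `torch` forward pass (the model and frame below are self-contained stand-ins, not U-2-Net's actual code):
```python
import torch
import torch.nn.functional as F

torch.set_num_threads(2)              # cap CPU threads so the rest of the machine stays responsive

model = torch.nn.Conv2d(3, 1, 3, padding=1)   # stand-in for the loaded u2netp network
model.eval()

frame = torch.rand(1, 3, 480, 640)    # stand-in for a preprocessed webcam frame
with torch.no_grad():                 # no autograd bookkeeping -> less memory and CPU work
    # Downscaling before the forward pass (e.g. to 320x320) cuts cost substantially.
    small = F.interpolate(frame, size=(320, 320), mode="bilinear", align_corners=False)
    pred = model(small)
```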
|
open
|
2021-01-20T16:38:19Z
|
2021-03-17T09:36:08Z
|
https://github.com/xuebinqin/U-2-Net/issues/149
|
[] |
NaeemKhan333
| 1
|
graphistry/pygraphistry
|
jupyter
| 50
|
FAQ / guide on exporting
|
cc @thibaudh @padentomasello @briantrice
|
closed
|
2016-01-11T21:50:51Z
|
2020-06-10T06:46:52Z
|
https://github.com/graphistry/pygraphistry/issues/50
|
[
"enhancement"
] |
lmeyerov
| 0
|
pydata/xarray
|
pandas
| 10,166
|
Updating zarr causes errors when saving to zarr.
|
### What is your issue?
xarray version 2024.9.0
zarr version 3.0.5
When attempting to save to zarr, the error below results. I can save the same file to zarr happily using zarr version 2.18.4. I've checked, and the same thing happens for a wide range of files. Resetting the encoding has no effect.
[tas_AUST-04_ERA5_historical_hres_BOM_BARRA-C2_v1_mon_197901-197901(1).zip](https://github.com/user-attachments/files/19414274/tas_AUST-04_ERA5_historical_hres_BOM_BARRA-C2_v1_mon_197901-197901.1.zip)
```
ds = xr.open_dataset('/scratch/nhat_drf_data/zarr_sandbox/tas_AUST-04_ERA5_historical_hres_BOM_BARRA-C2_v1_mon_197901-197901(1).nc')
ds.to_zarr('/scratch/nhat_drf_data/zarr_sandbox/test.zarr')
```
gives the error
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/common.py:139, in parse_shapelike(data)
138 try:
--> 139 data_tuple = tuple(data)
140 except TypeError as e:
TypeError: 'NoneType' object is not iterable
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
Cell In[3], line 1
----> 1 ds.to_zarr('/scratch/nhat_drf_data/zarr_sandbox/test.zarr')
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/xarray/core/dataset.py:2562, in Dataset.to_zarr(self, store, chunk_store, mode, synchronizer, group, encoding, compute, consolidated, append_dim, region, safe_chunks, storage_options, zarr_version, write_empty_chunks, chunkmanager_store_kwargs)
2415 """Write dataset contents to a zarr group.
2416
2417 Zarr chunks are determined in the following way:
(...) 2558 The I/O user guide, with more details and examples.
2559 """
2560 from xarray.backends.api import to_zarr
-> 2562 return to_zarr( # type: ignore[call-overload,misc]
2563 self,
2564 store=store,
2565 chunk_store=chunk_store,
2566 storage_options=storage_options,
2567 mode=mode,
2568 synchronizer=synchronizer,
2569 group=group,
2570 encoding=encoding,
2571 compute=compute,
2572 consolidated=consolidated,
2573 append_dim=append_dim,
2574 region=region,
2575 safe_chunks=safe_chunks,
2576 zarr_version=zarr_version,
2577 write_empty_chunks=write_empty_chunks,
2578 chunkmanager_store_kwargs=chunkmanager_store_kwargs,
2579 )
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/xarray/backends/api.py:1784, in to_zarr(dataset, store, chunk_store, mode, synchronizer, group, encoding, compute, consolidated, append_dim, region, safe_chunks, storage_options, zarr_version, write_empty_chunks, chunkmanager_store_kwargs)
1782 writer = ArrayWriter()
1783 # TODO: figure out how to properly handle unlimited_dims
-> 1784 dump_to_store(dataset, zstore, writer, encoding=encoding)
1785 writes = writer.sync(
1786 compute=compute, chunkmanager_store_kwargs=chunkmanager_store_kwargs
1787 )
1789 if compute:
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/xarray/backends/api.py:1467, in dump_to_store(dataset, store, writer, encoder, encoding, unlimited_dims)
1464 if encoder:
1465 variables, attrs = encoder(variables, attrs)
-> 1467 store.store(variables, attrs, check_encoding, writer, unlimited_dims=unlimited_dims)
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/xarray/backends/zarr.py:720, in ZarrStore.store(self, variables, attributes, check_encoding_set, writer, unlimited_dims)
717 else:
718 variables_to_set = variables_encoded
--> 720 self.set_variables(
721 variables_to_set, check_encoding_set, writer, unlimited_dims=unlimited_dims
722 )
723 if self._consolidate_on_close:
724 zarr.consolidate_metadata(self.zarr_group.store)
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/xarray/backends/zarr.py:824, in ZarrStore.set_variables(self, variables, check_encoding_set, writer, unlimited_dims)
821 else:
822 encoding["write_empty_chunks"] = self._write_empty
--> 824 zarr_array = self.zarr_group.create(
825 name,
826 shape=shape,
827 dtype=dtype,
828 fill_value=fill_value,
829 **encoding,
830 )
831 zarr_array = _put_attrs(zarr_array, encoded_attrs)
833 write_region = self._write_region if self._write_region is not None else {}
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/group.py:2354, in Group.create(self, *args, **kwargs)
2352 def create(self, *args: Any, **kwargs: Any) -> Array:
2353 # Backwards compatibility for 2.x
-> 2354 return self.create_array(*args, **kwargs)
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/_compat.py:43, in _deprecate_positional_args.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs)
41 extra_args = len(args) - len(all_args)
42 if extra_args <= 0:
---> 43 return f(*args, **kwargs)
45 # extra_args > 0
46 args_msg = [
47 f"{name}={arg}"
48 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:], strict=False)
49 ]
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/group.py:2473, in Group.create_array(self, name, shape, dtype, chunks, shards, filters, compressors, compressor, serializer, fill_value, order, attributes, chunk_key_encoding, dimension_names, storage_options, overwrite, config)
2378 """Create an array within this group.
2379
2380 This method lightly wraps :func:`zarr.core.array.create_array`.
(...) 2467 AsyncArray
2468 """
2469 compressors = _parse_deprecated_compressor(
2470 compressor, compressors, zarr_format=self.metadata.zarr_format
2471 )
2472 return Array(
-> 2473 self._sync(
2474 self._async_group.create_array(
2475 name=name,
2476 shape=shape,
2477 dtype=dtype,
2478 chunks=chunks,
2479 shards=shards,
2480 fill_value=fill_value,
2481 attributes=attributes,
2482 chunk_key_encoding=chunk_key_encoding,
2483 compressors=compressors,
2484 serializer=serializer,
2485 dimension_names=dimension_names,
2486 order=order,
2487 filters=filters,
2488 overwrite=overwrite,
2489 storage_options=storage_options,
2490 config=config,
2491 )
2492 )
2493 )
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/sync.py:208, in SyncMixin._sync(self, coroutine)
205 def _sync(self, coroutine: Coroutine[Any, Any, T]) -> T:
206 # TODO: refactor this to to take *args and **kwargs and pass those to the method
207 # this should allow us to better type the sync wrapper
--> 208 return sync(
209 coroutine,
210 timeout=config.get("async.timeout"),
211 )
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/sync.py:163, in sync(coro, loop, timeout)
160 return_result = next(iter(finished)).result()
162 if isinstance(return_result, BaseException):
--> 163 raise return_result
164 else:
165 return return_result
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/sync.py:119, in _runner(coro)
114 """
115 Await a coroutine and return the result of running it. If awaiting the coroutine raises an
116 exception, the exception will be returned.
117 """
118 try:
--> 119 return await coro
120 except Exception as ex:
121 return ex
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/group.py:1102, in AsyncGroup.create_array(self, name, shape, dtype, chunks, shards, filters, compressors, compressor, serializer, fill_value, order, attributes, chunk_key_encoding, dimension_names, storage_options, overwrite, config)
1007 """Create an array within this group.
1008
1009 This method lightly wraps :func:`zarr.core.array.create_array`.
(...) 1097
1098 """
1099 compressors = _parse_deprecated_compressor(
1100 compressor, compressors, zarr_format=self.metadata.zarr_format
1101 )
-> 1102 return await create_array(
1103 store=self.store_path,
1104 name=name,
1105 shape=shape,
1106 dtype=dtype,
1107 chunks=chunks,
1108 shards=shards,
1109 filters=filters,
1110 compressors=compressors,
1111 serializer=serializer,
1112 fill_value=fill_value,
1113 order=order,
1114 zarr_format=self.metadata.zarr_format,
1115 attributes=attributes,
1116 chunk_key_encoding=chunk_key_encoding,
1117 dimension_names=dimension_names,
1118 storage_options=storage_options,
1119 overwrite=overwrite,
1120 config=config,
1121 )
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/array.py:4146, in create_array(store, name, shape, dtype, data, chunks, shards, filters, compressors, serializer, fill_value, order, zarr_format, attributes, chunk_key_encoding, dimension_names, storage_options, overwrite, config, write_data)
4141 store_path = await make_store_path(store, path=name, mode=mode, storage_options=storage_options)
4143 data_parsed, shape_parsed, dtype_parsed = _parse_data_params(
4144 data=data, shape=shape, dtype=dtype
4145 )
-> 4146 result = await init_array(
4147 store_path=store_path,
4148 shape=shape_parsed,
4149 dtype=dtype_parsed,
4150 chunks=chunks,
4151 shards=shards,
4152 filters=filters,
4153 compressors=compressors,
4154 serializer=serializer,
4155 fill_value=fill_value,
4156 order=order,
4157 zarr_format=zarr_format,
4158 attributes=attributes,
4159 chunk_key_encoding=chunk_key_encoding,
4160 dimension_names=dimension_names,
4161 overwrite=overwrite,
4162 config=config,
4163 )
4165 if write_data is True and data_parsed is not None:
4166 await result._set_selection(
4167 BasicIndexer(..., shape=result.shape, chunk_grid=result.metadata.chunk_grid),
4168 data_parsed,
4169 prototype=default_buffer_prototype(),
4170 )
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/array.py:3989, in init_array(store_path, shape, dtype, chunks, shards, filters, compressors, serializer, fill_value, order, zarr_format, attributes, chunk_key_encoding, dimension_names, overwrite, config)
3986 chunks_out = chunk_shape_parsed
3987 codecs_out = sub_codecs
-> 3989 meta = AsyncArray._create_metadata_v3(
3990 shape=shape_parsed,
3991 dtype=dtype_parsed,
3992 fill_value=fill_value,
3993 chunk_shape=chunks_out,
3994 chunk_key_encoding=chunk_key_encoding_parsed,
3995 codecs=codecs_out,
3996 dimension_names=dimension_names,
3997 attributes=attributes,
3998 )
4000 arr = AsyncArray(metadata=meta, store_path=store_path, config=config)
4001 await arr._save_metadata(meta, ensure_parents=True)
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/array.py:694, in AsyncArray._create_metadata_v3(shape, dtype, chunk_shape, fill_value, chunk_key_encoding, codecs, dimension_names, attributes)
687 if dtype.kind in "UTS":
688 warn(
689 f"The dtype `{dtype}` is currently not part in the Zarr format 3 specification. It "
690 "may not be supported by other zarr implementations and may change in the future.",
691 category=UserWarning,
692 stacklevel=2,
693 )
--> 694 chunk_grid_parsed = RegularChunkGrid(chunk_shape=chunk_shape)
695 return ArrayV3Metadata(
696 shape=shape,
697 data_type=dtype,
(...) 703 attributes=attributes or {},
704 )
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/chunk_grids.py:176, in RegularChunkGrid.__init__(self, chunk_shape)
175 def __init__(self, *, chunk_shape: ChunkCoordsLike) -> None:
--> 176 chunk_shape_parsed = parse_shapelike(chunk_shape)
178 object.__setattr__(self, "chunk_shape", chunk_shape_parsed)
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/common.py:142, in parse_shapelike(data)
140 except TypeError as e:
141 msg = f"Expected an integer or an iterable of integers. Got {data} instead."
--> 142 raise TypeError(msg) from e
144 if not all(isinstance(v, int) for v in data_tuple):
145 msg = f"Expected an iterable of integers. Got {data} instead."
TypeError: Expected an integer or an iterable of integers. Got None instead.
[tas_AUST-04_ERA5_historical_hres_BOM_BARRA-C2_v1_mon_197901-197901(1).zip](https://github.com/user-attachments/files/19414266/tas_AUST-04_ERA5_historical_hres_BOM_BARRA-C2_v1_mon_197901-197901.1.zip)
```
|
closed
|
2025-03-24T04:27:09Z
|
2025-03-24T04:32:47Z
|
https://github.com/pydata/xarray/issues/10166
|
[
"needs triage"
] |
bweeding
| 1
|
klen/mixer
|
sqlalchemy
| 37
|
Create a way to define custom Types and Generators and error descriptively when Types are not recognized.
|
I defined a couple custom SQLAlchemy types with [`sqlalchemy.types.TypeDecorator`](http://docs.sqlalchemy.org/en/rel_0_9/core/custom_types.html#augmenting-existing-types), and immediately Mixer choked on them. I had to reverse-engineer the system and monkey-patch my custom types in order to get it working.
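For context, here is a minimal example of the kind of `TypeDecorator`-based custom column type being described (a hypothetical type, not the actual project code):
```python
import sqlalchemy.types as types

class LowercaseText(types.TypeDecorator):
    """Hypothetical custom type: stores strings lowercased, returns them unchanged on load."""
    impl = types.Text

    def process_bind_param(self, value, dialect):
        # called when writing a value to the database
        return value.lower() if value is not None else None

    def process_result_value(self, value, dialect):
        # called when reading a value back from the database
        return value
```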
Here's a link to my [Stack Overflow post](http://stackoverflow.com/questions/26416307/missing-parameter-in-function/28362205#28362205) about this, including the hacky solution I implemented, and here's the traceback I got:
```
mixer: ERROR: Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/mixer/main.py", line 576, in blend
return type_mixer.blend(**values)
File "/usr/local/lib/python2.7/site-packages/mixer/main.py", line 125, in blend
for name, value in defaults.items()
File "/usr/local/lib/python2.7/site-packages/mixer/main.py", line 125, in <genexpr>
for name, value in defaults.items()
File "/usr/local/lib/python2.7/site-packages/mixer/mix_types.py", line 220, in gen_value
return type_mixer.gen_field(field)
File "/usr/local/lib/python2.7/site-packages/mixer/main.py", line 202, in gen_field
return self.gen_value(field.name, field, unique=unique)
File "/usr/local/lib/python2.7/site-packages/mixer/main.py", line 245, in gen_value
fab = self.get_fabric(field, field_name, fake=fake)
File "/usr/local/lib/python2.7/site-packages/mixer/main.py", line 290, in get_fabric
self.__fabrics[key] = self.make_fabric(field.scheme, field_name, fake)
File "/usr/local/lib/python2.7/site-packages/mixer/backend/sqlalchemy.py", line 178, in make_fabric
stype, field_name=field_name, fake=fake, kwargs=kwargs)
File "/usr/local/lib/python2.7/site-packages/mixer/main.py", line 306, in make_fabric
fab = self.__factory.get_fabric(scheme, field_name, fake)
File "/usr/local/lib/python2.7/site-packages/mixer/factory.py", line 158, in get_fabric
if not func and fcls.__bases__:
AttributeError: Mixer (myproject.models.MyModel): 'NoneType' object has no attribute '__bases__'
```
Anyway, thanks for making Mixer! It's a really powerful tool, and I love anything that makes testing easier.
|
closed
|
2015-02-06T09:29:40Z
|
2015-08-12T15:01:27Z
|
https://github.com/klen/mixer/issues/37
|
[] |
wolverdude
| 1
|
jschneier/django-storages
|
django
| 1,129
|
Empty `SpooledTemporaryFile` created when `closed` property is used multiple times
|
When `GoogleCloudFile`'s `closed` property is used multiple times, the spooled temporary `_file` is recreated even when it was previously closed. This is because https://github.com/django/django/blob/6f453cd2981525b33925faaadc7a6e51fa90df5c/django/core/files/utils.py#L53 uses `self.file`, which triggers creation of `_file` regardless. As a result, when `.close()` is called afterwards, an empty file is uploaded to GS.
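A simplified, self-contained sketch of the lazy-file pattern described above (modeled loosely on the linked `FileProxyMixin` line, not the actual django-storages code): checking `closed` after a close goes through `self.file` and silently recreates an empty temporary file.
```python
import tempfile

class LazyStorageFile:
    """Toy stand-in for a storage File whose _file is created lazily on access."""
    def __init__(self):
        self._file = None

    @property
    def file(self):
        if self._file is None:
            # In the real class this spools/downloads content; here it is just empty.
            self._file = tempfile.SpooledTemporaryFile()
        return self._file

    @property
    def closed(self):
        # Modeled on django's FileProxyMixin: going through self.file recreates _file.
        return not self.file or self.file.closed

    def close(self):
        if self._file is not None:
            self._file.close()
            self._file = None

f = LazyStorageFile()
f.close()
print(f.closed)   # False: checking .closed just created a brand-new empty _file
```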
|
open
|
2022-04-22T07:17:38Z
|
2022-04-22T07:17:38Z
|
https://github.com/jschneier/django-storages/issues/1129
|
[] |
skylander86
| 0
|
encode/apistar
|
api
| 543
|
is this a bug?
|

I got an error when visiting localhost:5556/docs:
Could not build url for endpoint 'static' with values ['filename']. Did you mean 'serve_static_asgi' instead?
**and I found that `self.map._rules_by_endpoint` does not contain the key `static`**

|
closed
|
2018-05-15T10:49:54Z
|
2018-05-18T02:40:17Z
|
https://github.com/encode/apistar/issues/543
|
[] |
goushicui
| 4
|
apify/crawlee-python
|
web-scraping
| 403
|
Evaluate the efficiency of opening new Playwright tabs versus windows
|
Try to experiment with [PlaywrightBrowserController](https://github.com/apify/crawlee-python/blob/master/src/crawlee/browsers/playwright_browser_controller.py) to determine whether opening new Playwright pages in tabs offers better performance compared to opening them in separate windows (current state).
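A minimal Playwright sketch of the two modes being compared (illustrative only, not PlaywrightBrowserController code): pages created from one browser context typically open as tabs of the same window in headful mode, while a fresh context gives an isolated, separate window.
```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)

    # Variant A: two pages in one context -> usually two tabs of the same window.
    context = browser.new_context()
    tab_one = context.new_page()
    tab_two = context.new_page()

    # Variant B: a second context -> isolated cookies/cache and its own window.
    other_context = browser.new_context()
    separate_page = other_context.new_page()

    browser.close()
```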
|
open
|
2024-08-06T07:47:10Z
|
2024-08-06T08:31:05Z
|
https://github.com/apify/crawlee-python/issues/403
|
[
"t-tooling",
"solutioning"
] |
vdusek
| 1
|
ansible/awx
|
django
| 14,918
|
variables and source_vars expect dictionary but apply as yaml (multiple modules)
|
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
A dictionary provided for the following fields seems to be applied as a raw JSON string rather than being rendered as YAML:
- inventories: variables
- hosts: variables
- groups: variables
- inventory sources: source_vars
Also, see https://github.com/ansible/awx/issues/14842
### AWX version
23.2.0
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [X] Collection
- [ ] CLI
- [ ] Other
### Installation method
docker development environment
### Modifications
no
### Ansible version
2.14.2
### Operating system
Red Hat Enterprise Linux release 9.1 (Plow)
### Web browser
Chrome
### Steps to reproduce
```
- ansible.controller.inventory:
name: "Dynamic A"
organization: "A"
variables: {tt: tt}
- ansible.controller.host:
name: "Host A"
inventory: "Static A"
variables: {tt: tt}
- ansible.controller.group:
name: "Group A"
inventory: "Static A"
variables: {tt: tt}
- ansible.controller.inventory_source:
name: "A"
inventory: "Dynamic A"
source_vars: {tt: tt}
```
### Expected results
```
YAML:
---
tt: tt
```
### Actual results
```
YAML:
{"tt": "tt"}
```
### Additional information
_No response_
|
open
|
2024-02-25T08:27:40Z
|
2024-02-25T08:32:12Z
|
https://github.com/ansible/awx/issues/14918
|
[
"type:bug",
"component:awx_collection",
"needs_triage",
"community"
] |
kk-at-redhat
| 0
|
microsoft/MMdnn
|
tensorflow
| 832
|
Convert trained yolov3 Darknet-53 custom model to tensorflow model
|
Platform: ubuntu 16.04
Python version: 3.6
Source framework: mxnet (gluon-cv)
Destination framework: Tensorflow
Model Type: Object detection
Pre-trained model path: [https://gluon-cv.mxnet.io/build/examples_detection/demo_yolo.html#sphx-glr-build-examples-detection-demo-yolo-py](url)
I have trained a custom yolov3 Darknet-53 model (yolo3_darknet53_custom.params) using gluon-cv (mxnet). I need to convert the yolo3_darknet53_custom.params (mxnet) model to yolo3_darknet53_custom.pb (tensorflow)
Also, I see https://pypi.org/project/mmdnn/ object detection is under on-going Models.
Queries:
Does MMdnn support converting YOLO models (object detection) trained using gluon-cv (mxnet) in general?
Is there any way or workaround by which I can convert YOLO models?
Any leads would be great!
Thank you
|
open
|
2020-05-07T14:13:04Z
|
2020-05-08T18:15:42Z
|
https://github.com/microsoft/MMdnn/issues/832
|
[] |
analyticalrohit
| 1
|
chainer/chainer
|
numpy
| 8,199
|
flaky test: `chainerx_tests/unit_tests/routines_tests/test_normalization.py::test_BatchNorm`
|
https://travis-ci.org/chainer/chainer/jobs/591364861

`FAIL tests/chainerx_tests/unit_tests/routines_tests/test_normalization.py::test_BatchNorm_param_0_{contiguous=None}_param_0_{decay=None}_param_0_{eps=2e-05}_param_0_{param_dtype='float16'}_param_0_{x_dtype='float16'}_param_3_{axis=None, reduced_shape=(3, 4, 5, 2), x_shape=(2, 3, 4, 5, 2)}[native:0]`

```
[2019-09-30 08:05:44] E chainer.testing.function_link.FunctionTestError: Outputs do not match the expected values.
[2019-09-30 08:05:44] E Indices of outputs that do not match: 0
[2019-09-30 08:05:44] E Expected shapes and dtypes: (2, 3, 4, 5, 2):float16
[2019-09-30 08:05:44] E Actual shapes and dtypes: (2, 3, 4, 5, 2):float16
[2019-09-30 08:05:44] E 
[2019-09-30 08:05:44] E 
[2019-09-30 08:05:44] E Error details of output [0]:
[2019-09-30 08:05:44] E 
[2019-09-30 08:05:44] E Not equal to tolerance rtol=0.1, atol=0.1
[2019-09-30 08:05:44] E 
[2019-09-30 08:05:44] E (mismatch 0.4166666666666714%)
[2019-09-30 08:05:44] E x: array([-0.1726 , 0.3286 , 0.4915 , 1.105 , 0.2152 , 0.587 ,
[2019-09-30 08:05:44] E -0.2905 , -0.012695, 0.554 , -1.178 , -0.3076 , -0.4988 ,
[2019-09-30 08:05:44] E -0.3975 , 0.4343 , 0.813 , 1.951 , 0.2693 , 0.758 ,...
[2019-09-30 08:05:44] E y: array([-0.1726 , 0.3286 , 0.4915 , 1.105 , 0.2155 , 0.587 ,
[2019-09-30 08:05:44] E -0.2905 , -0.012695, 0.554 , -1.178 , -0.3071 , -0.4988 ,
[2019-09-30 08:05:44] E -0.3975 , 0.4343 , 0.813 , 1.951 , 0.269 , 0.758 ,...
[2019-09-30 08:05:44] E 
[2019-09-30 08:05:44] E assert_allclose failed: 
[2019-09-30 08:05:44] E shape: (2, 3, 4, 5, 2) (2, 3, 4, 5, 2)
[2019-09-30 08:05:44] E dtype: float16 float16
[2019-09-30 08:05:44] E i: (0, 2, 0, 4, 1)
[2019-09-30 08:05:44] E x[i]: 0.070556640625
[2019-09-30 08:05:44] E y[i]: -0.03399658203125
[2019-09-30 08:05:44] E relative error[i]: 3.076171875
[2019-09-30 08:05:44] E absolute error[i]: 0.10455322265625
:
```
|
closed
|
2019-09-30T08:25:24Z
|
2019-10-29T04:56:41Z
|
https://github.com/chainer/chainer/issues/8199
|
[
"cat:test",
"prio:high"
] |
niboshi
| 1
|
mwaskom/seaborn
|
matplotlib
| 2,861
|
Legends don't represent additional visual properties
|
This is a meta issue replacing the following (exhaustive) list of reports:
- #2852
- #2005
- #1763
- #940
The basic issue is that artists in seaborn legends typically do not look exactly like artists in the plot in cases where additional matplotlib keyword arguments have been passed. This is semi-intentional (more like, it was an intentional decision to avoid the complexity of making this work) but is a source of confusion.
This does work as expected in the new objects interface, so I would expect it to eventually be resolved in the function interface once they are refactored to use that behind the scenes. Although the legend code in the objects interface is complex and probably a little buggy at the moment. I don't know whether there will be any effort to improve this aspect of legends in the plotting functions without larger changes to the codebase; legends are hard (#2231).
|
closed
|
2022-06-15T02:53:19Z
|
2023-09-11T01:18:03Z
|
https://github.com/mwaskom/seaborn/issues/2861
|
[
"rough-edge",
"plots"
] |
mwaskom
| 0
|
seleniumbase/SeleniumBase
|
web-scraping
| 3,266
|
Add a full range of scroll methods for CDP Mode
|
### Add a full range of scroll methods for CDP Mode
----
Once this task is completed, we should expect to see all these methods:
```python
sb.cdp.scroll_into_view(selector) # Scroll to the element
sb.cdp.scroll_to_y(y) # Scroll to the y-position
sb.cdp.scroll_to_top() # Scroll to the top
sb.cdp.scroll_to_bottom() # Scroll to the bottom
sb.cdp.scroll_up(amount=25) # Relative scroll by amount
sb.cdp.scroll_down(amount=25) # Relative scroll by amount
```
|
closed
|
2024-11-14T19:25:54Z
|
2024-11-14T21:45:09Z
|
https://github.com/seleniumbase/SeleniumBase/issues/3266
|
[
"enhancement",
"UC Mode / CDP Mode"
] |
mdmintz
| 1
|
django-oscar/django-oscar
|
django
| 4,225
|
Sandbox site is down
|
[Sandbox](http://latest.oscarcommerce.com/) is down. The sandbox site was using Heroku for deployments; I assume Heroku's free tier was being used, but they changed their policy a while ago, or the credits have run out.
[Removal of Heroku Free Product Plans FAQ](https://help.heroku.com/RSBRUH58/removal-of-heroku-free-product-plans-faq)
|
open
|
2024-01-14T19:44:28Z
|
2024-02-29T12:20:45Z
|
https://github.com/django-oscar/django-oscar/issues/4225
|
[] |
Hisham-Pak
| 1
|
google-research/bert
|
tensorflow
| 920
|
How to print learning_rate?
|
How can I print the `learning_rate` so that I can see whether it decays?
Thank you!
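One common approach with the TF1 Estimator setup BERT uses is to attach a `LoggingTensorHook` to the decayed learning-rate tensor. Below is a self-contained TF1 sketch of that idea; the decay values are illustrative, and in the repo the actual tensor is built inside `optimization.create_optimizer`, so you would log that tensor instead.
```python
import tensorflow as tf  # TF 1.x, matching the BERT codebase

tf.logging.set_verbosity(tf.logging.INFO)  # LoggingTensorHook prints at INFO level

global_step = tf.train.get_or_create_global_step()
# Illustrative decay schedule in the same style as optimization.create_optimizer.
learning_rate = tf.train.polynomial_decay(
    learning_rate=5e-5, global_step=global_step,
    decay_steps=1000, end_learning_rate=0.0, power=1.0)

increment = tf.assign_add(global_step, 1)
log_hook = tf.train.LoggingTensorHook({"learning_rate": learning_rate}, every_n_iter=100)

with tf.train.MonitoredTrainingSession(hooks=[log_hook]) as sess:
    for _ in range(300):
        sess.run(increment)  # the hook prints the decayed learning rate as steps advance
```
In the Estimator-based scripts, the equivalent is to pass such a hook via the returned spec's `training_hooks`.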
|
closed
|
2019-11-15T07:21:51Z
|
2021-03-12T03:35:37Z
|
https://github.com/google-research/bert/issues/920
|
[] |
guotong1988
| 2
|
tqdm/tqdm
|
jupyter
| 1,523
|
AttributeError Exception under a console-less PyInstaller build
|
- [x] I have marked all applicable categories:
+ [x] exception-raising bug
+ [ ] visual output bug
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
* * *
```
Environment: 4.66.1 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)] win32
```
It seems tqdm does not support PyInstaller under a console-less window (`-w` flag). It is supported if you allow PyInstaller to include and show the console (`-c` flag). This may or may not be exclusive to PyInstaller 6.x, but that's what I'm currently using.
To reproduce take the following script:
```python
from tqdm import tqdm
t = tqdm(total=100)
t.update(100)
print("Finished")
```
Run that in a normal Python interpreter environment and it runs just fine. However, freeze it with PyInstaller with `-w` flag (window-only flag) then it will fail. Freezing it with `-c` flag (include console flag) then it will work.
```
File "tqdm\std.py", line 1099, in __init__
File "tqdm\std.py", line 1348, in refresh
File "tqdm\std.py", line 1496, in display
File "tqdm\std.py", line 462, in print_status
File "tqdm\std.py", line 455, in fp_write
File "tqdm\utils.py", line 139, in __getattr__
AttributeError: 'NoneType' object has no attribute 'write'
```
The bug seems to occur when trying to print the status to `fp` where `fp` is None at this point:
https://github.com/tqdm/tqdm/blob/4c956c20b83be4312460fc0c4812eeb3fef5e7df/tqdm/std.py#L455
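A possible workaround sketch (not an official fix): in a console-less PyInstaller build `sys.stderr` can be `None`, so hand tqdm an explicit writable stream via its `file=` argument, falling back to a log file when no console exists.
```python
import sys
from tqdm import tqdm

# In a PyInstaller -w (windowed) build, sys.stderr may be None, which is what
# trips tqdm's default fp.write call. Give it a real stream instead.
out = sys.stderr if sys.stderr is not None else open("progress.log", "w", buffering=1)

t = tqdm(total=100, file=out)
t.update(100)
t.close()
```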
|
open
|
2023-10-13T19:42:40Z
|
2023-10-13T19:42:40Z
|
https://github.com/tqdm/tqdm/issues/1523
|
[] |
rlaphoenix
| 0
|
iperov/DeepFaceLab
|
machine-learning
| 5,613
|
Attribute Error Message Upon Installation
|
I am trying to re-download DeepFaceLive and DeepFaceLab, and every time I try to run the batch file after unzipping, it gives me this error message: “AttributeError: module 'cv2' has no attribute 'gapi'
Press any key to continue”
I’ve tried deleting it and reinstalling, but the same thing happens. I tried restarting my computer and everything. What could be the issue?

|
open
|
2023-01-23T11:41:45Z
|
2023-06-08T23:19:26Z
|
https://github.com/iperov/DeepFaceLab/issues/5613
|
[] |
heyxur
| 1
|
bigscience-workshop/petals
|
nlp
| 250
|
Inference timeout on larger input prompts
|
### Currently using the chatbot example where session id is saved and inference occurs one token at a time:
```python
with models[model_name][1].inference_session(max_length=512) as sess:
    print(f"Thread Start -> {threading.get_ident()}")
    output[model_name] = ""
    inputs = models[model_name][0](prompt, return_tensors="pt")["input_ids"].to(DEVICE)
    n_input_tokens = inputs.shape[1]
    done = False
    while not done and not kill.is_set():
        outputs = models[model_name][1].generate(
            inputs,
            max_new_tokens=1,
            do_sample=True,
            top_p=top_p,
            temperature=temperature,
            repetition_penalty=repetition_penalty,
            session=sess
        )
        output[model_name] += models[model_name][0].decode(outputs[0, n_input_tokens:])
        token_cnt += 1
        print("\n["+ str(threading.get_ident()) + "]" + output[model_name], end="", flush=True)
        for stop_word in stop:
            stop_word = codecs.getdecoder("unicode_escape")(stop_word)[0]
            if stop_word != '' and stop_word in output[model_name]:
                print(f"\nDONE (stop) -> {threading.get_ident()}")
                done = True
        if flag or (token_cnt >= max_tokens):
            print(f"\nDONE (max tokens) -> {threading.get_ident()}")
            done = True
        inputs = None  # Prefix is passed only for the 1st token of the bot's response
        n_input_tokens = 0
```
### When I pass in a small prompt, inference works:
**PROMPT**
> Please answer the following question:
> Question: What is the capital of Germany?
> Answer:
```
Berlin, Germany
```
### A slightly larger prompt always results in timeout errors:
**PROMPT**
> Given a pair of sentences, choose whether the two sentences agree (entailment)/disagree (contradiction) with each other.
> Possible labels: 1. entailment 2. contradiction
> Sentence 1: The skier was on the edge of the ramp. Sentence 2: The skier was dressed in winter clothes.
> Label: entailment
> Sentence 1: The boy skated down the staircase railing. Sentence 2: The boy is a newbie skater.
> Label: contradiction
> Sentence 1: Two middle-aged people stand by a golf hole. Sentence 2: A couple riding in a golf cart.
> Label:
```
Feb 03 16:16:37.377 [INFO] Peer 12D3KooWJALV7xRuHLzJHAftZhmSeqz68hywh1oK8oYmW844vWHt did not respond, banning it temporarily
Feb 03 16:16:37.377 [WARN] [/home/gene/dockerx/temp/petals/src/petals/client/inference_session.py.step:311] Caught exception when running inference from block 16 (retry in 0 sec): TimeoutError()
Feb 03 16:16:37.378 [WARN] [/home/gene/dockerx/temp/petals/src/petals/client/routing/sequence_manager.py.make_sequence:109] Remote SequenceManager is still searching for routes, waiting for it to become ready
Feb 03 16:17:10.908 [INFO] Peer 12D3KooWJALV7xRuHLzJHAftZhmSeqz68hywh1oK8oYmW844vWHt did not respond, banning it temporarily
Feb 03 16:17:10.908 [WARN] [/home/gene/dockerx/temp/petals/src/petals/client/inference_session.py.step:311] Caught exception when running inference from block 16 (retry in 1 sec): TimeoutError()
Feb 03 16:17:11.909 [WARN] [/home/gene/dockerx/temp/petals/src/petals/client/routing/sequence_manager.py.make_sequence:109] Remote SequenceManager is still searching for routes, waiting for it to become ready
```
|
closed
|
2023-02-04T00:22:26Z
|
2023-02-27T14:25:54Z
|
https://github.com/bigscience-workshop/petals/issues/250
|
[] |
gururise
| 6
|
ultralytics/yolov5
|
pytorch
| 12,468
|
How to analyze remote machine training results with Comet?
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I'm trying to analyze my YOLOv5 training with Comet; the results were generated on an HPC machine. After generating the api_key and the configuration file, I don't understand from the Comet UI how to pass in the results from the remote machine for analysis.
Could you help me? Thanks.
### Additional
_No response_
|
closed
|
2023-12-05T10:14:54Z
|
2024-01-16T00:21:24Z
|
https://github.com/ultralytics/yolov5/issues/12468
|
[
"question",
"Stale"
] |
unrue
| 4
|
lukasmasuch/streamlit-pydantic
|
streamlit
| 68
|
The type of the following property is currently not supported: Multi Selection
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/lukasmasuch/streamlit-pydantic/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
Among the examples at https://st-pydantic.streamlit.app/, one is `Complex Default`. When I run the code on my own system, I face the following issue:
The type of the following property is currently not supported: Multi Selection
What must I do?
### Reproducible Code Example
```Python
from enum import Enum
from typing import Set
import streamlit as st
from pydantic import BaseModel, Field
import streamlit_pydantic as sp
class OtherData(BaseModel):
    text: str
    integer: int

class SelectionValue(str, Enum):
    FOO = "foo"
    BAR = "bar"

class ExampleModel(BaseModel):
    long_text: str = Field(
        ..., format="multi-line", description="Unlimited text property"
    )
    integer_in_range: int = Field(
        20,
        ge=10,
        le=30,
        multiple_of=2,
        description="Number property with a limited range.",
    )
    single_selection: SelectionValue = Field(
        ..., description="Only select a single item from a set."
    )
    multi_selection: Set[SelectionValue] = Field(
        ..., description="Allows multiple items from a set."
    )
    read_only_text: str = Field(
        "Lorem ipsum dolor sit amet",
        description="This is a ready only text.",
        readOnly=True,
    )
    single_object: OtherData = Field(
        ...,
        description="Another object embedded into this model.",
    )

data = sp.pydantic_form(key="my_form", model=ExampleModel)
if data:
    st.json(data.model_dump_json())
```
### Steps To Reproduce
streamlit run `the code`
### Expected Behavior
ability to select values to create a set
### Current Behavior
```
The type of the following property is currently not supported: Multi Selection
```
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- streamlit-pydantic version:
0.6.0
- Python version:
3.10.12
pydantic==2.10.5
pydantic-settings==2.7.1
pydantic_core==2.27.2
streamlit-pydantic==0.6.0
### Additional Information
Thank you in advance
|
open
|
2025-01-10T18:34:15Z
|
2025-02-14T05:03:56Z
|
https://github.com/lukasmasuch/streamlit-pydantic/issues/68
|
[
"type:bug",
"status:needs-triage"
] |
alikaz3mi
| 1
|
ultralytics/yolov5
|
pytorch
| 12,990
|
No detections on custom data training
|
### Search before asking
- [ ] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I'm trying to train yolov5 with custom data. I'm using very few images just to test everything. I have 2 classes and 6 images (3 images for each class), which I know is WAY too little, but as I said, it's only for testing purposes. So I trained the model with the data which worked fine, but when I try to detect one of the images that was used for training, it doesn't work. Shouldn't it be able to detect the images I trained with, even if there are so few?
Command for training:
python train.py --img 2048 --batch 16 --epochs 5 --data test.yaml --weights yolov5s.pt --nosave --cache
Command for testing:
python detect.py --weights runs/train/exp11/weights/last.pt --source data/cartes_mini/images/2-C_jpg.rf.3ad5f752441ac8389c42afec2b5ecc10.jpg
I've also tried with another dataset that contained 1 class and around 20 training images, but got the same result.
Thank you!
### Additional
_No response_
|
closed
|
2024-05-08T16:47:38Z
|
2024-10-20T19:45:27Z
|
https://github.com/ultralytics/yolov5/issues/12990
|
[
"question",
"Stale"
] |
Just1813
| 3
|
sammchardy/python-binance
|
api
| 786
|
support coin margined future operations
|
Currently, all futures operations in the library are limited to USD-margined futures, which live on /fapi/v1/. COIN-margined futures operations live on /dapi/v1/ and are not yet supported.
|
open
|
2021-04-23T07:16:59Z
|
2021-04-23T07:16:59Z
|
https://github.com/sammchardy/python-binance/issues/786
|
[] |
ZhiminHeGit
| 0
|
pytest-dev/pytest-xdist
|
pytest
| 515
|
INTERNALERROR> TypeError: unhashable type: 'ExceptionChainRepr'
|
Running pytest with `-n4` and `--doctest-modules` together, on a project with import errors, leads to this misleading internal error, which hides what the actual error is.
```
(pytest-debug) :~/PycharmProjects/scratch$ py.test --pyargs test123 --doctest-modules -n4
=================================================== test session starts ====================================================
platform linux -- Python 3.7.6, pytest-5.4.1.dev27+g3b48fce, py-1.8.1, pluggy-0.13.0
rootdir: ~/PycharmProjects/scratch
plugins: forked-1.1.3, xdist-1.31.1.dev1+g6fd5b56
gw0 ok / gw1 ok / gw2 C / gw3 CINTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "site-packages/pytest-5.4.1.dev27+g3b48fce-py3.7.egg/_pytest/main.py", line 191, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "site-packages/pytest-5.4.1.dev27+g3b48fce-py3.7.egg/_pytest/main.py", line 247, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR> File "site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "site-packages/pluggy/manager.py", line 92, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "site-packages/pluggy/manager.py", line 86, in <lambda>
INTERNALERROR> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR> File "site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "site-packages/pytest_xdist-1.31.1.dev1+g6fd5b56-py3.7.egg/xdist/dsession.py", line 112, in pytest_runtestloop
INTERNALERROR> self.loop_once()
INTERNALERROR> File "site-packages/pytest_xdist-1.31.1.dev1+g6fd5b56-py3.7.egg/xdist/dsession.py", line 135, in loop_once
INTERNALERROR> call(**kwargs)
INTERNALERROR> File "site-packages/pytest_xdist-1.31.1.dev1+g6fd5b56-py3.7.egg/xdist/dsession.py", line 272, in worker_collectreport
INTERNALERROR> self._failed_worker_collectreport(node, rep)
INTERNALERROR> File "site-packages/pytest_xdist-1.31.1.dev1+g6fd5b56-py3.7.egg/xdist/dsession.py", line 302, in _failed_worker_collectreport
INTERNALERROR> if rep.longrepr not in self._failed_collection_errors:
INTERNALERROR> TypeError: unhashable type: 'ExceptionChainRepr'
```
When run with just one of `-n4` and `--doctest-modules` it's fine.
```
(pytest-debug) :~/PycharmProjects/scratch$ py.test --pyargs test123 -n4
=================================================== test session starts ====================================================
platform linux -- Python 3.7.6, pytest-5.4.1.dev27+g3b48fce, py-1.8.1, pluggy-0.13.0
rootdir: ~/PycharmProjects/scratch
plugins: forked-1.1.3, xdist-1.31.1.dev1+g6fd5b56
gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0]
========================================================== ERRORS ==========================================================
_____________________________________ ERROR collecting test123/test/test_something.py ______________________________________
ImportError while importing test module '~/PycharmProjects/scratch/test123/test/test_something.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test123/test/test_something.py:1: in <module>
import foo
E ModuleNotFoundError: No module named 'foo'
================================================= short test summary info ==================================================
ERROR test123/test/test_something.py
=
```
or
```
(pytest-debug) :~/PycharmProjects/scratch$ py.test --pyargs test123 --doctest-modules
=================================================== test session starts ====================================================
platform linux -- Python 3.7.6, pytest-5.4.1.dev27+g3b48fce, py-1.8.1, pluggy-0.13.0
rootdir: ~/PycharmProjects/scratch
plugins: forked-1.1.3, xdist-1.31.1.dev1+g6fd5b56
collected 0 items / 2 errors
========================================================== ERRORS ==========================================================
_____________________________________ ERROR collecting test123/test/test_something.py ______________________________________
test123/test/test_something.py:1: in <module>
import foo
E ModuleNotFoundError: No module named 'foo'
_____________________________________ ERROR collecting test123/test/test_something.py ______________________________________
ImportError while importing test module '~/PycharmProjects/scratch/test123/test/test_something.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test123/test/test_something.py:1: in <module>
import foo
E ModuleNotFoundError: No module named 'foo'
================================================= short test summary info ==================================================
ERROR test123/test/test_something.py - ModuleNotFoundError: No module named 'foo'
ERROR test123/test/test_something.py
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
==================================================== 2 errors in 0.06s =====================================================
```
Project is a minimal set of test files
```
~/PycharmProjects/scratch$ find test123 | grep -v __pycache__
test123
test123/test
test123/test/test_something.py
test123/__init__.py
```
and versions are
```
# packages in environment at /..../pytest-debug:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 0_gnu conda-forge
apipkg 1.5 pypi_0 pypi
attrs 19.3.0 py_0 conda-forge
ca-certificates 2019.11.28 hecc5488_0 conda-forge
certifi 2019.11.28 py37hc8dfbb8_1 conda-forge
execnet 1.7.1 pypi_0 pypi
importlib-metadata 1.5.2 py37hc8dfbb8_0 conda-forge
importlib_metadata 1.5.2 0 conda-forge
ld_impl_linux-64 2.34 h53a641e_0 conda-forge
libffi 3.2.1 he1b5a44_1007 conda-forge
libgcc-ng 9.2.0 h24d8f2e_2 conda-forge
libgomp 9.2.0 h24d8f2e_2 conda-forge
libstdcxx-ng 9.2.0 hdf63c60_2 conda-forge
more-itertools 8.2.0 py_0 conda-forge
ncurses 6.1 hf484d3e_1002 conda-forge
openssl 1.1.1e h516909a_0 conda-forge
packaging 20.1 py_0 conda-forge
pip 20.0.2 py_2 conda-forge
pluggy 0.13.0 py37_0 conda-forge
py 1.8.1 py_0 conda-forge
py-cpuinfo 5.0.0 py_0 conda-forge
pyparsing 2.4.6 py_0 conda-forge
pytest 5.4.1.dev27+g3b48fce pypi_0 pypi
pytest-forked 1.1.3 pypi_0 pypi
pytest-xdist 1.31.1.dev1+g6fd5b56.d20200326 pypi_0 pypi
python 3.7.6 h8356626_5_cpython conda-forge
python_abi 3.7 1_cp37m conda-forge
readline 8.0 hf8c457e_0 conda-forge
ripgrep 11.0.2 h516909a_3 conda-forge
setuptools 46.1.3 py37hc8dfbb8_0 conda-forge
six 1.14.0 py_1 conda-forge
sqlite 3.30.1 hcee41ef_0 conda-forge
tk 8.6.10 hed695b0_0 conda-forge
wcwidth 0.1.8 pyh9f0ad1d_1 conda-forge
wheel 0.34.2 py_1 conda-forge
xz 5.2.4 h516909a_1002 conda-forge
zipp 3.1.0 py_0 conda-forge
zlib 1.2.11 h516909a_1006 conda-forge
```
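For what it's worth, a minimal sketch of a hashable-safe version of the failing membership check (this assumes deduplicating on the rendered string of the report is acceptable, which I haven't verified against xdist's internals):
```python
# ExceptionChainRepr is unhashable, so it cannot be used as a dict key or
# tested with `in` against a dict; its string form can.
failed_collection_errors = {}

def record_collect_failure(longrepr):
    key = str(longrepr)
    if key not in failed_collection_errors:
        failed_collection_errors[key] = True
        return True   # first time this failure is seen
    return False      # already recorded (e.g. by another worker)
```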
|
closed
|
2020-03-26T14:07:31Z
|
2020-05-13T13:32:39Z
|
https://github.com/pytest-dev/pytest-xdist/issues/515
|
[] |
lusewell
| 2
|
OthersideAI/self-operating-computer
|
automation
| 151
|
[BUG] Fedora system ,can't use it
|
Found a bug? Please fill out the sections below. 👍

My system Fedora 39
### Describe the bug
I can't use it. Here is my `pip show` output:
Name: self-operating-computer
Version: 1.2.8
Summary:
Home-page:
Author:
Author-email:
License:
Location: /home/linlori/self-operating-computer/venv/lib64/python3.12/site-packages
Requires: aiohttp, annotated-types, anyio, certifi, charset-normalizer, colorama, contourpy, cycler, distro, easyocr, EasyProcess, entrypoint2, exceptiongroup, fonttools, google-generativeai, h11, httpcore, httpx, idna, importlib-resources, kiwisolver, matplotlib, MouseInfo, mss, numpy, openai, packaging, Pillow, prompt-toolkit, PyAutoGUI, pydantic, pydantic-core, PyGetWindow, PyMsgBox, pyparsing, pyperclip, PyRect, pyscreenshot, PyScreeze, python-dateutil, python-dotenv, python3-xlib, pytweening, requests, rubicon-objc, six, sniffio, tqdm, typing-extensions, ultralytics, urllib3, wcwidth, zipp
Required-by:
### Steps to Reproduce
1. Asked it to "open the browser".
2. Got the following errors:
[Self-Operating Computer][Error] Something went wrong. Trying another method X get_image failed: error 8 (73, 0, 1028)
[Self-Operating Computer][Error] Something went wrong. Trying again X get_image failed: error 8 (73, 0, 1028)
[Self-Operating Computer][Error] -> cannot access local variable 'content' where it is not associated with a value
3. It does not work.
### Expected Behavior
A brief description of what you expected to happen.
### Actual Behavior:
what actually happened.
### Environment
- OS: fedora 39
- GPT-4v:
- Framework Version (optional):
### Screenshots
If applicable, add screenshots to help explain your problem.
### Additional context
Add any other context about the problem here.
|
closed
|
2024-01-30T20:42:58Z
|
2024-03-25T15:51:44Z
|
https://github.com/OthersideAI/self-operating-computer/issues/151
|
[
"bug"
] |
Lori-Lin7
| 3
|
nltk/nltk
|
nlp
| 3,152
|
CategorizedMarkdownCorpusReader.sections() does not return final section of markdown
|
If you load a markdown corpus using CategorizedMarkdownCorpusReader, the object has a sections() method which returns a list of MarkdownSection objects. This function does not return the final section of *any* document I have tested.
Here is an example markdown, titled test.md
```
# Section One
This is a test section. The heading level (number of #s) does not impact this bug
# Section Two
This is a test section. The heading level (number of #s) does not impact this bug
```
Here is an example of code (needs an actual directory with the above markdown file), in which only the first section is returned.
```
from nltk.corpus.reader.markdown import CategorizedMarkdownCorpusReader
directory = "/some/path/here"
reader = CategorizedMarkdownCorpusReader(directory, r"\w+\.md")  # note: the fileid pattern needs "+" to match names like test.md
print(*[s.heading for s in reader.sections("test.md")], sep="\n")
print(len(reader.sections("test.md")))
```
This prints:
```
Section One
1
```
But should print:
```
Section One
Section Two
2
```
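As a possible workaround, sections can be split manually; a minimal regex-based sketch (not nltk code) that keeps the final section:
```python
import re

def split_markdown_sections(text):
    # Split on ATX headings; re.split with a capturing group returns
    # [preamble, heading1, body1, heading2, body2, ...].
    parts = re.split(r"(?m)^(#+ .*)$", text)
    sections = []
    for i in range(1, len(parts), 2):
        heading = parts[i].lstrip("#").strip()
        body = parts[i + 1].strip()
        sections.append((heading, body))
    return sections

print(split_markdown_sections(open("test.md").read()))
```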
|
open
|
2023-05-08T18:53:21Z
|
2023-05-11T09:30:15Z
|
https://github.com/nltk/nltk/issues/3152
|
[] |
nkuehnle
| 1
|
dask/dask
|
scikit-learn
| 11,376
|
Slicing an array on the last chunk of an axis duplicates the number of chunks
|
**Describe the issue**:
After updating to Dask 2024.08.02, I'm noticing that slicing a Dask array on the last positions of the last axis duplicates the number of chunks with certain configurations like the one in the example below. Is this expected? From my perspective it should keep the same chunk shape.
**Minimal Complete Verifiable Example**:
```python
import dask.array as da
da.zeros(
shape=(6, 7),
chunks=(1, 2)
)[:, 5:]
# The chunks would have the following shape (1, 1)
da.zeros(
shape=(6, 7),
chunks=(1, 2)
)[:, [5, 6]]
# The chunks would have the following shape (1, 2)
```
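As a possible stop-gap (not a suggested fix), explicitly rechunking after the slice restores the chunk shape I expect:
```python
import dask.array as da

arr = da.zeros(shape=(6, 7), chunks=(1, 2))[:, 5:]
# Rechunk back to the intended chunk shape after the slice.
arr = arr.rechunk((1, 2))
```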


**Anything else we need to know?**:
**Environment**:
- Dask version: 2024.08.02
- Python version: 3.11
- Operating System: Windows 11
- Install method (conda, pip, source): pip
|
closed
|
2024-09-06T01:07:30Z
|
2024-09-06T01:44:39Z
|
https://github.com/dask/dask/issues/11376
|
[
"needs triage"
] |
josephnowak
| 3
|
Sanster/IOPaint
|
pytorch
| 264
|
A question about the custom model ؟
|
What should we do to add a custom model in Google Colab?
What should we do to add xformers in Google Colab?
|
open
|
2023-04-03T17:14:05Z
|
2023-04-03T23:15:34Z
|
https://github.com/Sanster/IOPaint/issues/264
|
[] |
kingal2000
| 0
|
unit8co/darts
|
data-science
| 2,517
|
[BUG] TFTModel predicts nan values when MapeLoss function is used
|
**Describe the bug**
When MapeLoss is used as loss function with a TFTModel (loss_fn parameter), the output of the training shows val_loss and train_loss = 0:
```python
from darts.utils.losses import MapeLoss
model = TFTModel(
...
loss_fn=MapeLoss(),
...
)
```
```
Epoch 4: 100%
1/1 [00:00<00:00, 11.02it/s, train_loss=0.000, val_loss=0.000]
```
Then, when we try to get some predictions with that model, prediction method returns an array of nan values:
```python
array([[[nan]],
[[nan]],
[[nan]]]
```
There is no issue when any other loss function (e.g MSELoss) is used.
**To Reproduce**
It can be reproduced with the following code. Dataset is also attached: [input_example.csv](https://github.com/user-attachments/files/16816003/input_example.csv)
```python
import pandas as pd
import torch
from pytorch_lightning.callbacks import Callback, EarlyStopping
from darts import TimeSeries
from darts.models import TFTModel
from darts.utils.losses import MapeLoss
from torch.nn import MSELoss
# Retrieve target series
df = pd.read_csv('input_example.csv')
s = TimeSeries.from_dataframe(df, 'date', 'target')
test = s[-3:]
val = s[-18:-3]
train = s[:-18]
# Build and train the model
early_stopper = EarlyStopping("val_loss", min_delta=0.001, patience=10, verbose=True)
callbacks = [early_stopper]
model = TFTModel(
input_chunk_length=12,
output_chunk_length=3,
batch_size=64,
n_epochs=5,
add_relative_index=True,
add_encoders=None,
loss_fn=MapeLoss(), # MapeLoss(),# MSELoss(),
likelihood=None,
random_state=42,
pl_trainer_kwargs={"accelerator": "gpu", "devices": [0], "callbacks": callbacks},
save_checkpoints=True,
model_name="my_model",
force_reset=True
)
model.fit(series=train,val_series=val,verbose=True)
best_model = model.load_from_checkpoint(model_name="my_model", best=True, work_dir='darts_logs')
best_model.predict(n=3, num_samples=1, series=train.append(val))
```
**Expected behavior**
Prediction output should be an array of float values, and not an array of nans.
**System (please complete the following information):**
- Python version: 3.11.8
- darts version 0.30.0
**Additional context**
I've tried to understand where the nan values are coming from. I've modified MapeLoss (https://github.com/unit8co/darts/blob/master/darts/utils/losses.py#L96) to print the values of the two parameters:
```python
def forward(self, inpt, tgt):
print(f'TGT: {tgt}')
print(f'INPT: {inpt}')
return torch.mean(torch.abs(_divide_no_nan(tgt - inpt, tgt)))
```
It seems that from the second method call onwards, the INPT parameter arrives as an array of nan values.
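In case it helps with triage, one possible stop-gap would be an epsilon-stabilised MAPE-style loss instead of MapeLoss; this is only a sketch (the class name and epsilon are mine, not darts code), and I haven't verified that it avoids the nan predictions:
```python
import torch
import torch.nn as nn

class SafeMapeLoss(nn.Module):
    """MAPE-style loss with a clamped denominator to avoid inf/nan gradients."""

    def __init__(self, eps: float = 1e-8):
        super().__init__()
        self.eps = eps

    def forward(self, inpt, tgt):
        # Clamp |target| away from zero before dividing.
        denom = torch.clamp(tgt.abs(), min=self.eps)
        return torch.mean(torch.abs((tgt - inpt) / denom))
```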
|
closed
|
2024-08-30T11:15:08Z
|
2024-10-16T08:06:33Z
|
https://github.com/unit8co/darts/issues/2517
|
[
"question"
] |
akepa
| 3
|
fastapi-users/fastapi-users
|
fastapi
| 1,224
|
Unable to install library on Mac M1
|
## Describe the bug
I'm trying to install the library in a virtualenv with the command
```
pip install 'fastapi-users[sqlalchemy]'
```
but I receive the following error
```
Collecting fastapi-users[sqlalchemy]
Using cached fastapi_users-11.0.0-py3-none-any.whl (38 kB)
Requirement already satisfied: email-validator<2.1,>=1.1.0 in ./venv/lib/python3.9/site-packages (from fastapi-users[sqlalchemy]) (2.0.0.post2)
Requirement already satisfied: makefun<2.0.0,>=1.11.2 in ./venv/lib/python3.9/site-packages (from fastapi-users[sqlalchemy]) (1.15.1)
Requirement already satisfied: python-multipart==0.0.6 in ./venv/lib/python3.9/site-packages (from fastapi-users[sqlalchemy]) (0.0.6)
Requirement already satisfied: fastapi>=0.65.2 in ./venv/lib/python3.9/site-packages (from fastapi-users[sqlalchemy]) (0.96.0)
Requirement already satisfied: passlib[bcrypt]==1.7.4 in ./venv/lib/python3.9/site-packages (from fastapi-users[sqlalchemy]) (1.7.4)
Requirement already satisfied: pyjwt[crypto]==2.6.0 in ./venv/lib/python3.9/site-packages (from fastapi-users[sqlalchemy]) (2.6.0)
Collecting fastapi-users-db-sqlalchemy>=4.0.0
Using cached fastapi_users_db_sqlalchemy-5.0.0-py3-none-any.whl (6.9 kB)
Requirement already satisfied: bcrypt>=3.1.0 in ./venv/lib/python3.9/site-packages (from passlib[bcrypt]==1.7.4->fastapi-users[sqlalchemy]) (4.0.1)
Requirement already satisfied: cryptography>=3.4.0 in ./venv/lib/python3.9/site-packages (from pyjwt[crypto]==2.6.0->fastapi-users[sqlalchemy]) (41.0.1)
Requirement already satisfied: idna>=2.0.0 in ./venv/lib/python3.9/site-packages (from email-validator<2.1,>=1.1.0->fastapi-users[sqlalchemy]) (3.4)
Requirement already satisfied: dnspython>=2.0.0 in ./venv/lib/python3.9/site-packages (from email-validator<2.1,>=1.1.0->fastapi-users[sqlalchemy]) (2.3.0)
Requirement already satisfied: starlette<0.28.0,>=0.27.0 in ./venv/lib/python3.9/site-packages (from fastapi>=0.65.2->fastapi-users[sqlalchemy]) (0.27.0)
Requirement already satisfied: pydantic!=1.7,!=1.7.1,!=1.7.2,!=1.7.3,!=1.8,!=1.8.1,<2.0.0,>=1.6.2 in ./venv/lib/python3.9/site-packages (from fastapi>=0.65.2->fastapi-users[sqlalchemy]) (1.10.9)
Requirement already satisfied: sqlalchemy[asyncio]<2.1.0,>=2.0.0 in ./venv/lib/python3.9/site-packages (from fastapi-users-db-sqlalchemy>=4.0.0->fastapi-users[sqlalchemy]) (2.0.15)
Requirement already satisfied: cffi>=1.12 in ./venv/lib/python3.9/site-packages (from cryptography>=3.4.0->pyjwt[crypto]==2.6.0->fastapi-users[sqlalchemy]) (1.15.1)
Requirement already satisfied: typing-extensions>=4.2.0 in ./venv/lib/python3.9/site-packages (from pydantic!=1.7,!=1.7.1,!=1.7.2,!=1.7.3,!=1.8,!=1.8.1,<2.0.0,>=1.6.2->fastapi>=0.65.2->fastapi-users[sqlalchemy]) (4.6.3)
Collecting greenlet!=0.4.17
Using cached greenlet-2.0.2.tar.gz (164 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: anyio<5,>=3.4.0 in ./venv/lib/python3.9/site-packages (from starlette<0.28.0,>=0.27.0->fastapi>=0.65.2->fastapi-users[sqlalchemy]) (3.7.0)
Requirement already satisfied: exceptiongroup in ./venv/lib/python3.9/site-packages (from anyio<5,>=3.4.0->starlette<0.28.0,>=0.27.0->fastapi>=0.65.2->fastapi-users[sqlalchemy]) (1.1.1)
Requirement already satisfied: sniffio>=1.1 in ./venv/lib/python3.9/site-packages (from anyio<5,>=3.4.0->starlette<0.28.0,>=0.27.0->fastapi>=0.65.2->fastapi-users[sqlalchemy]) (1.3.0)
Requirement already satisfied: pycparser in ./venv/lib/python3.9/site-packages (from cffi>=1.12->cryptography>=3.4.0->pyjwt[crypto]==2.6.0->fastapi-users[sqlalchemy]) (2.21)
Building wheels for collected packages: greenlet
Building wheel for greenlet (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [98 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.9-universal2-cpython-39
creating build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
creating build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
creating build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_version.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_weakref.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_gc.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/leakcheck.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_generator.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_greenlet_trash.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_throw.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_tracing.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_cpp.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_contextvars.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_greenlet.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_extension_interface.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_generator_nested.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_stack_saved.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_leaks.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
running egg_info
writing src/greenlet.egg-info/PKG-INFO
writing dependency_links to src/greenlet.egg-info/dependency_links.txt
writing requirements to src/greenlet.egg-info/requires.txt
writing top-level names to src/greenlet.egg-info/top_level.txt
reading manifest file 'src/greenlet.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files found matching 'benchmarks/*.json'
no previously-included directories found matching 'docs/_build'
warning: no files found matching '*.py' under directory 'appveyor'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '.coverage' found anywhere in distribution
adding license file 'LICENSE'
adding license file 'LICENSE.PSF'
adding license file 'AUTHORS'
writing manifest file 'src/greenlet.egg-info/SOURCES.txt'
copying src/greenlet/greenlet.cpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_allocator.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_compiler_compat.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_cpython_compat.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_exceptions.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_greenlet.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_internal.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_refs.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_slp_switch.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_thread_state.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_thread_state_dict_cleanup.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_thread_support.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/slp_platformselect.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/platform/setup_switch_x64_masm.cmd -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_aarch64_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_alpha_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_amd64_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_arm32_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_arm32_ios.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_arm64_masm.asm -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_arm64_masm.obj -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_arm64_msvc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_csky_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_m68k_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_mips_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_ppc64_aix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_ppc64_linux.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_ppc_aix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_ppc_linux.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_ppc_macosx.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_ppc_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_riscv_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_s390_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_sparc_sun_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_x32_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_x64_masm.asm -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_x64_masm.obj -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_x64_msvc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_x86_msvc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_x86_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/tests/_test_extension.c -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/_test_extension_cpp.cpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
running build_ext
building 'greenlet._greenlet' extension
creating build/temp.macosx-10.9-universal2-cpython-39
creating build/temp.macosx-10.9-universal2-cpython-39/src
creating build/temp.macosx-10.9-universal2-cpython-39/src/greenlet
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -arch arm64 -arch x86_64 -Werror=implicit-function-declaration -Wno-error=unreachable-code -I/Users/michelegrifa/dev/PycharmProjects/Backend-Brigros-B2B/venv/include -I/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -c src/greenlet/greenlet.cpp -o build/temp.macosx-10.9-universal2-cpython-39/src/greenlet/greenlet.o --std=gnu++11
src/greenlet/greenlet.cpp:16:10: fatal error: 'Python.h' file not found
#include <Python.h>
^~~~~~~~~~
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for greenlet
Running setup.py clean for greenlet
Failed to build greenlet
Installing collected packages: greenlet, fastapi-users, fastapi-users-db-sqlalchemy
Running setup.py install for greenlet ... error
error: subprocess-exited-with-error
× Running setup.py install for greenlet did not run successfully.
│ exit code: 1
╰─> [100 lines of output]
running install
/Users/michelegrifa/dev/PycharmProjects/Backend-Brigros-B2B/venv/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib.macosx-10.9-universal2-cpython-39
creating build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
creating build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
creating build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_version.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_weakref.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_gc.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/leakcheck.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_generator.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_greenlet_trash.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_throw.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_tracing.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_cpp.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_contextvars.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_greenlet.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_extension_interface.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_generator_nested.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_stack_saved.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/test_leaks.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
running egg_info
writing src/greenlet.egg-info/PKG-INFO
writing dependency_links to src/greenlet.egg-info/dependency_links.txt
writing requirements to src/greenlet.egg-info/requires.txt
writing top-level names to src/greenlet.egg-info/top_level.txt
reading manifest file 'src/greenlet.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files found matching 'benchmarks/*.json'
no previously-included directories found matching 'docs/_build'
warning: no files found matching '*.py' under directory 'appveyor'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '.coverage' found anywhere in distribution
adding license file 'LICENSE'
adding license file 'LICENSE.PSF'
adding license file 'AUTHORS'
writing manifest file 'src/greenlet.egg-info/SOURCES.txt'
copying src/greenlet/greenlet.cpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_allocator.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_compiler_compat.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_cpython_compat.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_exceptions.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_greenlet.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_internal.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_refs.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_slp_switch.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_thread_state.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_thread_state_dict_cleanup.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/greenlet_thread_support.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/slp_platformselect.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet
copying src/greenlet/platform/setup_switch_x64_masm.cmd -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_aarch64_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_alpha_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_amd64_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_arm32_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_arm32_ios.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_arm64_masm.asm -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_arm64_masm.obj -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_arm64_msvc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_csky_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_m68k_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_mips_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_ppc64_aix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_ppc64_linux.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_ppc_aix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_ppc_linux.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_ppc_macosx.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_ppc_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_riscv_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_s390_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_sparc_sun_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_x32_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_x64_masm.asm -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_x64_masm.obj -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_x64_msvc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_x86_msvc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/platform/switch_x86_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform
copying src/greenlet/tests/_test_extension.c -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
copying src/greenlet/tests/_test_extension_cpp.cpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests
running build_ext
building 'greenlet._greenlet' extension
creating build/temp.macosx-10.9-universal2-cpython-39
creating build/temp.macosx-10.9-universal2-cpython-39/src
creating build/temp.macosx-10.9-universal2-cpython-39/src/greenlet
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -arch arm64 -arch x86_64 -Werror=implicit-function-declaration -Wno-error=unreachable-code -I/Users/michelegrifa/dev/PycharmProjects/Backend-Brigros-B2B/venv/include -I/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -c src/greenlet/greenlet.cpp -o build/temp.macosx-10.9-universal2-cpython-39/src/greenlet/greenlet.o --std=gnu++11
src/greenlet/greenlet.cpp:16:10: fatal error: 'Python.h' file not found
#include <Python.h>
^~~~~~~~~~
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> greenlet
note: This is an issue with the package mentioned above, not pip.
```
## To Reproduce
Steps to reproduce the behavior:
1. Try to install the library on a Mac M1
## Expected behavior
A clear and concise description of what you expected to happen.
## Configuration
- Python version : 3.9
- FastAPI version : 0.96.0
- FastAPI Users version :
### FastAPI Users configuration
## Additional context
|
closed
|
2023-06-08T09:37:59Z
|
2023-06-08T11:20:19Z
|
https://github.com/fastapi-users/fastapi-users/issues/1224
|
[
"bug"
] |
michele-grifa
| 0
|
dmlc/gluon-cv
|
computer-vision
| 1,095
|
python 3.6 fails to build gluon-cv: 'ascii' codec can't decode byte 0xf0
|
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "setup.py", line 45, in <module>
long_description = open('README.md').read()
File "/usr/local/lib/python3.6/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xf0 in position 6475: ordinal not in range(128)
```
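The failure comes from `long_description = open('README.md').read()` in setup.py picking up the locale's default ASCII codec. A minimal sketch of a fix (assuming changing setup.py is acceptable):
```python
import io

# Read the README with an explicit encoding so the build does not depend on
# the locale's default codec (io.open works on both Python 2 and 3).
long_description = io.open('README.md', encoding='utf-8').read()
```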
|
closed
|
2019-12-15T20:18:32Z
|
2021-06-07T07:04:35Z
|
https://github.com/dmlc/gluon-cv/issues/1095
|
[
"Stale"
] |
yurivict
| 8
|
encode/httpx
|
asyncio
| 1,772
|
Add HTTPX to the Python documentation.
|
There are a couple of places in the Python documentation where `requests` is recommended for HTTP requests.
I think that we've done enough work on HTTPX now that we ought to look at getting `httpx` included here, or at *least* open the dialog up. Getting up into the Python docs for 3.10 onwards would seem to be a pretty reasonable aim.
* https://docs.python.org/3/library/http.client.html
* https://docs.python.org/3/library/urllib.request.html
If having a 1.0 release is a blocker to us getting into the docs, then let's make sure we've got a pull request ready to go into the docs, have discussed any wording options with the Python maintainers, and are aware of any deadlines we'd need to meet in order to get the docs change live. (E.g. is there a deadline if we want to be in the docs by the time 3.10 is fully released?)
I'm not super familiar with the process for submitting changes to Python or the Python docs.
|
closed
|
2021-07-26T10:41:21Z
|
2022-03-14T19:58:20Z
|
https://github.com/encode/httpx/issues/1772
|
[
"help wanted",
"docs",
"wontfix"
] |
tomchristie
| 4
|
iperov/DeepFaceLab
|
machine-learning
| 947
|
ValueError: cannot convert float NaN to integer
|
Hi, I got this error:
File "D:\Fakeapp\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\main.py", line 302, in <module>
arguments.func(arguments)
File "D:\Fakeapp\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\main.py", line 162, in process_train
Trainer.main(args, device_args)
File "D:\Fakeapp\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\mainscripts\Trainer.py", line 288, in main
lh_img = models.ModelBase.get_loss_history_preview(loss_history_to_show, iter, w, c)
File "D:\Fakeapp\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\models\ModelBase.py", line 649, in get_loss_history_preview
ph_max = int ( (plist_max[col][p] / plist_abs_max) * (lh_height-1) )
ValueError: cannot convert float NaN to integer
I spent a long time training the model and I really don't want to lose the progress. What is causing this?
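In case it's useful, a sketch of the kind of guard that could keep the loss-history preview from crashing on a NaN value (the helper name is illustrative, not DeepFaceLab code; a NaN loss usually means the training itself produced NaN, so the underlying problem may remain):
```python
import math

def safe_scale(value, abs_max, height):
    # Clamp NaN/inf before the int() cast that crashes in get_loss_history_preview.
    ratio = value / abs_max if abs_max else 0.0
    if math.isnan(ratio) or math.isinf(ratio):
        ratio = 0.0
    return int(ratio * (height - 1))
```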
|
open
|
2020-11-12T13:26:33Z
|
2023-06-08T21:36:25Z
|
https://github.com/iperov/DeepFaceLab/issues/947
|
[] |
19DMT87
| 1
|
allenai/allennlp
|
data-science
| 4,864
|
Move ModelCard and TaskCard abstractions to the main repository
|
closed
|
2020-12-14T21:51:22Z
|
2020-12-22T22:00:01Z
|
https://github.com/allenai/allennlp/issues/4864
|
[] |
AkshitaB
| 0
|
|
sunscrapers/djoser
|
rest-api
| 276
|
/auth/me/ not working on production
|
On my local Django server, I can retrieve the user from /auth/me/ without any problem, but in production I receive a 401 (Unauthorized) error.
Do I need some special setting server-side to make it work properly? Thank you
|
closed
|
2018-04-18T16:10:00Z
|
2018-04-23T23:11:57Z
|
https://github.com/sunscrapers/djoser/issues/276
|
[] |
Zeioth
| 1
|
HIT-SCIR/ltp
|
nlp
| 500
|
Problem with the SRL example
|
closed
|
2021-04-08T06:50:27Z
|
2021-04-08T08:33:44Z
|
https://github.com/HIT-SCIR/ltp/issues/500
|
[] |
xinyubai1209
| 2
|
|
vitalik/django-ninja
|
django
| 769
|
[BUG] The latest django-ninja (0.22.1) is incompatible with the latest django-ninja-extra (0.18.8)
|
The latest django-ninja (0.22.1) cannot be installed alongside the latest django-ninja-extra (0.18.8) when resolving dependencies with Poetry.
- Python version: 3.11.3
- Django version: 4.2
- Django-Ninja version: 0.22.1
--------------------------------------------------------------------------------------------------------------
Poetry output
(app1 backend-py3.11) PS C:\git\app1> poetry update
Updating dependencies
Resolving dependencies...
Because django-ninja-extra (0.18.8) depends on django-ninja (0.21.0)
and no versions of django-ninja-extra match >0.18.8,<0.19.0, django-ninja-extra (>=0.18.8,<0.19.0) requires django-ninja (0.21.0).
So, because app1 backend depends on both django-ninja (^0.22) and django-ninja-extra (^0.18.8), version solving failed.
|
closed
|
2023-05-29T10:59:44Z
|
2023-05-29T20:50:22Z
|
https://github.com/vitalik/django-ninja/issues/769
|
[] |
jankrnavek
| 2
|
alteryx/featuretools
|
scikit-learn
| 2,021
|
Binary comparison primitives fail with Ordinal input if scalar value is not in the order values
|
The following primitives can fail when the input is an Ordinal column and the scalar value used for comparison is not one of the ordinal order values:
- `GreaterThanScalar`
- `GreaterThanEqualToScalar`
- `LessThanScalar`
- `LessThanEqualToScalar`
The primitive functions should be updated to confirm the value is present in the categorical values if the input is Ordinal. If the value is not present `nan` should be returned.
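For reference, a minimal sketch of what that guard could look like at the pandas level (the function name and return behaviour are illustrative, not featuretools code):
```python
import numpy as np
import pandas as pd

def greater_than_scalar(values: pd.Series, scalar) -> pd.Series:
    # If the input is an ordered categorical and the scalar is not one of its
    # categories, the comparison is undefined, so return NaN for every row.
    if isinstance(values.dtype, pd.CategoricalDtype) and values.cat.ordered:
        if scalar not in values.cat.categories:
            return pd.Series(np.nan, index=values.index)
    return values > scalar
```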
#### Code Sample, a copy-pastable example to reproduce your bug.
```python
import featuretools as ft
from featuretools.tests.testing_utils import make_ecommerce_entityset
from featuretools.primitives import GreaterThanScalar
es = make_ecommerce_entityset()
invalid_feature = ft.Feature(es['customers'].ww['engagement_level'], primitive=GreaterThanScalar(value=10))
fm = ft.calculate_feature_matrix(entityset=es, features=[invalid_feature])
```
```
TypeError: Invalid comparison between dtype=category and int
```
|
closed
|
2022-04-19T19:33:08Z
|
2022-04-20T22:33:39Z
|
https://github.com/alteryx/featuretools/issues/2021
|
[
"bug"
] |
thehomebrewnerd
| 0
|
deeppavlov/DeepPavlov
|
nlp
| 1,462
|
levenshtein searcher works incorrectly
|
**DeepPavlov version** (you can look it up by running `pip show deeppavlov`): 0.15.0
**Python version**: 3.7.10
**Operating system** (ubuntu linux, windows, ...): ubuntu 20.04
**Issue**: Levenshtein searcher does not take symbol replacements into account
**Command that led to error**:
```
>>> from deeppavlov.models.spelling_correction.levenshtein.searcher_component import LevenshteinSearcher
>>> vocab = set("the cat sat on the mat".split())
>>> abet = set(c for w in vocab for c in w)
>>> searcher = LevenshteinSearcher(abet, vocab)
>>> 'cat' in searcher
True
>>> searcher.transducer.distance('rat', 'cat'),\
... searcher.transducer.distance('cat', 'rat')
(inf, inf)
>>> searcher.transducer.distance('at', 'cat')
1.0
>>> searcher.transducer.distance('rat', 'cat')
inf
>>> searcher.search("rat", 1)
[]
>>> searcher.search("at", 1)
[('mat', 1.0), ('cat', 1.0), ('sat', 1.0)]
```
**Expected output**:
```
>>> searcher.transducer.distance('rat', 'cat')
1.0
>>> searcher.search("rat", 1)
[('mat', 1.0), ('cat', 1.0), ('sat', 1.0)]
```
|
closed
|
2021-07-04T21:39:27Z
|
2023-07-10T11:45:40Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1462
|
[
"bug"
] |
oserikov
| 1
|
agronholm/anyio
|
asyncio
| 500
|
Creating Event when in from_thread.run_sync() context
|
When using `to_thread.run_sync` and then calling back again to the event loop with `from_thread.run_sync`, I can't create an `anyio.Event`, since sniffio fails to detect the running loop (no current_task is set in asyncio, which of course is kind of true).
Reproduced in a gist here https://gist.github.com/tapetersen/e85a808f0bba7dd396fa68eafb41012d
|
closed
|
2022-11-14T16:53:56Z
|
2022-11-16T09:21:24Z
|
https://github.com/agronholm/anyio/issues/500
|
[] |
tapetersen
| 1
|
apache/airflow
|
python
| 47,452
|
Rename DAG (from models/dag) to SchedulerDAG -- or remove it!
|
### Body
It is easily confused with the DAG in task sdk.
Ideally we can remove. At least rename it.
comment from ash
> That should be renamed as SchedulerDAG really -- (or possibly it can be removed now) -- it's a) a compat import while that part isn't complete, and b) where scheduler specific DAG functions go
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
|
open
|
2025-03-06T15:01:54Z
|
2025-03-06T15:04:11Z
|
https://github.com/apache/airflow/issues/47452
|
[
"area:Scheduler",
"kind:meta"
] |
dstandish
| 0
|
plotly/dash
|
jupyter
| 2,712
|
dash_duo.get_logs() returns None instead of an empty list
|
Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.14.2
dash-auth 2.0.0
dash-bootstrap-components 1.5.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dash-testing-stub 0.0.2
jupyter-dash 0.4.2
```
- if frontend related, tell us your Browser, Version and OS
- OS: OSX
- Browser chromedriver 120
**Describe the bug**
Tests such as this one: https://github.com/oegedijk/explainerdashboard/blob/3ae3fe6488cac234512014799895fff562143395/tests/integration_tests/test_dashboards.py#L8
on explainerdashboard are failing because dash_duo.get_logs() no longer returns an empty list as expected but instead returns None, both locally and in the GitHub Actions CI.
```
def test_classification_dashboard(dash_duo, precalculated_rf_classifier_explainer):
db = ExplainerDashboard(
precalculated_rf_classifier_explainer, title="testing", responsive=False
)
html = db.to_html()
assert html.startswith(
"\n<!DOCTYPE html>\n<html"
), "failed to generate dashboard to_html"
dash_duo.start_server(db.app)
dash_duo.wait_for_text_to_equal("h1", "testing", timeout=30)
> assert dash_duo.get_logs() == [], "browser console should contain no error"
E AssertionError: browser console should contain no error
E assert None == []
E + where None = <bound method Browser.get_logs of <dash.testing.composite.DashComposite object at 0x13dafd660>>()
E + where <bound method Browser.get_logs of <dash.testing.composite.DashComposite object at 0x13dafd660>> = <dash.testing.composite.DashComposite object at 0x13dafd660>.get_logs
```
**Expected behavior**
Should return [] as before.
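A possible stop-gap in the test itself (my own change, not a suggested fix for dash.testing):
```python
# Treat None and [] the same until get_logs() returns an empty list again.
logs = dash_duo.get_logs() or []
assert logs == [], "browser console should contain no error"
```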
|
open
|
2023-12-17T12:26:42Z
|
2024-08-13T19:44:06Z
|
https://github.com/plotly/dash/issues/2712
|
[
"bug",
"P3"
] |
oegedijk
| 1
|
jupyterhub/zero-to-jupyterhub-k8s
|
jupyter
| 3,597
|
Replace merge with rebase for cleaner history
|
I'm suggesting to use rebase instead of merge when incorporating changes from one branch to another.
Rebase helps maintain a cleaner, linear commit history by replaying the commits from the current branch on top of the target branch. This approach avoids the creation of unnecessary merge commits and ensures a more streamlined history, making it easier to track changes and manage the repository.
To achieve this, it's only necessary to change one config in the repository settings.
|
closed
|
2025-01-12T15:29:10Z
|
2025-01-13T14:06:22Z
|
https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3597
|
[
"maintenance"
] |
samyuh
| 3
|
saulpw/visidata
|
pandas
| 1,910
|
[loaders] Support builtin data formats
|
This is meant to be an aspirational wishlist request. For developers who would like to contribute, this can be used as a map to what loaders could be added.
In https://github.com/saulpw/visidata/issues/1587#issuecomment-1369263933 it was suggested that:
> When Python includes the batteries, VisiData should as well.
This was within the context of talking about the new toml module data format. If Python supports a data format it would be reasonable to expect that VisiData also supports that data format.
So I asked a generative chatbot which data formats the Python standard library supports. Below is the answer; some of these may work better as column formats than file formats, and it likely missed some formats.
There are some formats that are deprecated as described in [PEP 594] that do not make sense to include since they will eventually be removed from Python.
Other formats are useful as building blocks for other formats, like the email parsers can be used for email, but are also useful for things like Python METADATA files (e.g. [Source distribution format])
[Source distribution format]: https://packaging.python.org/en/latest/specifications/source-distribution-format/
[PEP 594]: https://peps.python.org/pep-0594
| Format | Module | Description | Documentation |
|--------|--------|-------------|---------------|
| CSV | csv | Comma Separated Values | [CSV Module Documentation](https://docs.python.org/3/library/csv.html) |
| JSON | json | JavaScript Object Notation | [JSON Module Documentation](https://docs.python.org/3/library/json.html) |
| XML | xml.etree.ElementTree | eXtensible Markup Language | [XML Etree Module Documentation](https://docs.python.org/3/library/xml.etree.elementtree.html) |
| HTML | html | Hypertext Markup Language | [HTML Module Documentation](https://docs.python.org/3/library/html.html) |
| INI | configparser | Configuration file format | [ConfigParser Module Documentation](https://docs.python.org/3/library/configparser.html) |
| Pickle | pickle | Python object serialization | [Pickle Module Documentation](https://docs.python.org/3/library/pickle.html) |
| ZIP | zipfile | Archive file format | [Zipfile Module Documentation](https://docs.python.org/3/library/zipfile.html) |
| TAR | tarfile | Archive file format | [Tarfile Module Documentation](https://docs.python.org/3/library/tarfile.html) |
| GZIP | gzip | Compression format | [Gzip Module Documentation](https://docs.python.org/3/library/gzip.html) |
| BZ2 | bz2 | Compression format | [BZ2 Module Documentation](https://docs.python.org/3/library/bz2.html) |
| LZMA | lzma | Compression format | [LZMA Module Documentation](https://docs.python.org/3/library/lzma.html) |
| Base64 | base64 | Binary-to-text encoding | [Base64 Module Documentation](https://docs.python.org/3/library/base64.html) |
| Structured Binary files | struct | Binary data manipulation | [Struct Module Documentation](https://docs.python.org/3/library/struct.html) |
| Email | email | Email message format | [Email Module Documentation](https://docs.python.org/3/library/email.html) |
| MIME | email.mime | Multipurpose Internet Mail Extensions | [MIME Module Documentation](https://docs.python.org/3/library/email.mime.html) |
| Python Code | ast | Abstract Syntax Trees | [AST Module Documentation](https://docs.python.org/3/library/ast.html) |
| Python Code | dis | Python byte code disassembler | [Dis Module Documentation](https://docs.python.org/3/library/dis.html) |
| Python Code | tokenize | Tokenizer for Python source | [Tokenize Module Documentation](https://docs.python.org/3/library/tokenize.html) |
| WAV | wave | WAV audio file format | [Wave Module Documentation](https://docs.python.org/3/library/wave.html) |
| Calendar | calendar | Calendar data and manipulation | [Calendar Module Documentation](https://docs.python.org/3/library/calendar.html) |
| Colorsys | colorsys | Color space conversions | [Colorsys Module Documentation](https://docs.python.org/3/library/colorsys.html) |
| URL | urllib.parse | URL parsing and manipulation | [Urllib.parse Module Documentation](https://docs.python.org/3/library/urllib.parse.html) |
| UUID | uuid | Universally Unique Identifiers | [UUID Module Documentation](https://docs.python.org/3/library/uuid.html) |
| IP Address | ipaddress | IPv4 and IPv6 manipulation | [IPaddress Module Documentation](https://docs.python.org/3/library/ipaddress.html) |
| Binary | binascii | Binary-to-ASCII conversions | [Binascii Module Documentation](https://docs.python.org/3/library/binascii.html) |
| Quoted-Printable | quopri | Quoted-Printable encoding and decoding | [Quopri Module Documentation](https://docs.python.org/3/library/quopri.html) |
| Apple plist | plistlib | Apple .plist file format | [Plistlib Module Documentation](https://docs.python.org/3/library/plistlist.html) |
| Netrc | netrc | Netrc files format used by ftp | [Netrc Module Documentation](https://docs.python.org/3/library/netrc.html) |
|
closed
|
2023-06-03T19:15:11Z
|
2024-12-31T23:45:13Z
|
https://github.com/saulpw/visidata/issues/1910
|
[
"wishlist"
] |
frosencrantz
| 3
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 1,585
|
Training pix2pix with different input size
|
When I try to train the pix2pix model with the `--preprocess`, `--load_size` or `--crop_size` flag I run into an issue.
I try to train the pix2pix model on grayscale 400x400 images. For this I use those parameters: `'--model' 'pix2pix' '--direction' 'BtoA' '--input_nc' '1' '--output_nc' '1' '--load_size' '400' '--crop_size' '400'`.
I get a RuntimeError on line 536 of the networks.py script:
```
def forward(self, x):
if self.outermost:
return self.model(x)
else: # add skip connections
return torch.cat([x, self.model(x)], 1) <--- error
```
`RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 3 but got size 2 for tensor number 1 in the list.`
When evaluating self.model(x) I get a shape of `torch.Size([1, 512, 2, 2])` however, x has the shape `torch.Size([1, 512, 3, 3])`.
I tried different sizes, used even and odd numbers, and made the load size bigger than the crop size. I also tried using RGB images and dropping the `input_nc` and `output_nc` flags. So far without success.
Does anyone have a hint?
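My current understanding (an assumption, not something confirmed in the code) is that the default `unet_256` generator halves the spatial size eight times, so the input height/width need to be divisible by 256; 400 is not, which would explain the mismatched skip-connection shapes. Two things that might be worth trying: setting `--load_size`/`--crop_size` to 512 (or 256), or padding inputs up to the next multiple of 256, roughly like this sketch:
```python
import torch.nn.functional as F

def pad_to_multiple(x, multiple=256):
    # Reflect-pad an (N, C, H, W) tensor so H and W become multiples of
    # `multiple`; the padding can be cropped away again after inference.
    h, w = x.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    return F.pad(x, (0, pad_w, 0, pad_h), mode='reflect')
```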
|
open
|
2023-06-27T15:38:42Z
|
2024-06-05T07:18:53Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1585
|
[] |
MrMonk3y
| 3
|
tensorpack/tensorpack
|
tensorflow
| 804
|
Inference with DoReFa-Net giving high top-1/top-5 error
|
Hi, I'm trying to do inference on DoReFa-Net with the ILSVRC2012 validation set as test data
(just to test the setup, not normally a "correct" practice I guess, but should give better-than-actual results right?),
and I'm getting really high top-1/top-5 error rate.
For the first 100 test images, I'm getting top-1/top-5 error rates around `0.95` and `0.85`,
where the README page says alexnet-126 should give top-1 error of `46.8%`.
Models used are the `alexnet-126.npz` from the model zoo, and also one trained with `alexnet-dorefa.py` with slightly modified parameters.
Report from training in `log.log` looks fine:
```
train-error-top1: 0.41264
train-error-top5: 0.19853
val-error-top1: 0.4913
val-error-top5: 0.25514
```
Tried on 2 setups:
a) python2.7 + tensorflow-gpu ('v1.4.0-19-ga52c8d9', '1.4.1') + tensorpack 0.8.6 + CUDA8 + Tesla K40c + `dorefa.py` from [here](https://github.com/tensorpack/tensorpack/blob/58529de18e9bdad1bab31aed9c397a8f340e7f94/examples/DoReFa-Net/dorefa.py) for TensorFlow<1.7
b) python3.6 + tensorflow 'v1.8.0-3921-gfb3ce04' 1.9.0-rc0 + Intel MKL(for data_format NCHW on CPU) + tensorpack 0.8.6
The inference script was slightly modified from `alexnet-dorefa.py` at the end of the `run_image` function,
just to use the ILSVRC data and also print the top-1/top-5 error, as given below.
Did I do anything silly? Can someone help try this script on your setup?
```python
import cv2
import tensorflow as tf
import argparse
import numpy as np
import os
import sys
from tensorpack import *
from tensorpack.tfutils.summary import add_param_summary
from tensorpack.tfutils.varreplace import remap_variables
from tensorpack.dataflow import dataset
from tensorpack.utils.gpu import get_nr_gpu
from imagenet_utils import get_imagenet_dataflow, fbresnet_augmentor, ImageNetModel
from dorefa import get_dorefa, ternarize
BITW = 1
BITA = 2
BITG = 6
TOTAL_BATCH_SIZE = 256
BATCH_SIZE = None
class Model(ImageNetModel):
weight_decay = 5e-6
weight_decay_pattern = 'fc.*/W'
def get_logits(self, image):
if BITW == 't':
fw, fa, fg = get_dorefa(32, 32, 32)
fw = ternarize
else:
fw, fa, fg = get_dorefa(BITW, BITA, BITG)
# monkey-patch tf.get_variable to apply fw
def new_get_variable(v):
name = v.op.name
# don't binarize first and last layer
if not name.endswith('W') or 'conv0' in name or 'fct' in name:
return v
else:
logger.info("Quantizing weight {}".format(v.op.name))
return fw(v)
def nonlin(x):
if BITA == 32:
return tf.nn.relu(x) # still use relu for 32bit cases
return tf.clip_by_value(x, 0.0, 1.0)
def activate(x):
return fa(nonlin(x))
with remap_variables(new_get_variable), \
argscope([Conv2D, BatchNorm, MaxPooling], data_format='channels_first'), \
argscope(BatchNorm, momentum=0.9, epsilon=1e-4), \
argscope(Conv2D, use_bias=False):
logits = (LinearWrap(image)
.Conv2D('conv0', 96, 12, strides=4, padding='VALID', use_bias=True)
.apply(activate)
.Conv2D('conv1', 256, 5, padding='SAME', split=2)
.apply(fg)
.BatchNorm('bn1')
.MaxPooling('pool1', 3, 2, padding='SAME')
.apply(activate)
.Conv2D('conv2', 384, 3)
.apply(fg)
.BatchNorm('bn2')
.MaxPooling('pool2', 3, 2, padding='SAME')
.apply(activate)
.Conv2D('conv3', 384, 3, split=2)
.apply(fg)
.BatchNorm('bn3')
.apply(activate)
.Conv2D('conv4', 256, 3, split=2)
.apply(fg)
.BatchNorm('bn4')
.MaxPooling('pool4', 3, 2, padding='VALID')
.apply(activate)
.FullyConnected('fc0', 4096)
.apply(fg)
.BatchNorm('bnfc0')
.apply(activate)
.FullyConnected('fc1', 4096, use_bias=False)
.apply(fg)
.BatchNorm('bnfc1')
.apply(nonlin)
.FullyConnected('fct', 1000, use_bias=True)())
add_param_summary(('.*/W', ['histogram', 'rms']))
tf.nn.softmax(logits, name='output') # for prediction
return logits
def optimizer(self):
lr = tf.get_variable('learning_rate', initializer=2e-4, trainable=False)
return tf.train.AdamOptimizer(lr, epsilon=1e-5)
def run_image(model, sess_init, inputs, inLabels=None):
pred_config = PredictConfig(
model=model,
session_init=sess_init,
input_names=['input'],
output_names=['output']
)
predictor = OfflinePredictor(pred_config)
meta = dataset.ILSVRCMeta()
pp_mean = meta.get_per_pixel_mean()
pp_mean_224 = pp_mean[16:-16, 16:-16, :]
words = meta.get_synset_words_1000()
def resize_func(im):
h, w = im.shape[:2]
scale = 256.0 / min(h, w)
desSize = map(int, (max(224, min(w, scale * w)),
max(224, min(h, scale * h))))
im = cv2.resize(im, tuple(desSize), interpolation=cv2.INTER_CUBIC)
return im
transformers = imgaug.AugmentorList([
imgaug.MapImage(resize_func),
imgaug.CenterCrop((224, 224)),
imgaug.MapImage(lambda x: x - pp_mean_224),
])
top1 = 0
top5 = 0
for idx, f in enumerate(inputs):
assert os.path.isfile(f)
img = cv2.imread(f).astype('float32')
assert img is not None
img = transformers.augment(img)[np.newaxis, :, :, :]
outputs = predictor(img)[0]
prob = outputs[0]
ret = prob.argsort()[-10:][::-1]
names = [words[i] for i in ret]
print(f + ":")
print(list(zip(names, prob[ret])))
if inLabels is not None:
label = inLabels[idx]
if label != ret[0]:
top1 += 1
print('top1 failed, label={}, ret[0]={}'.format(label, ret[0]))
if label not in ret[:5]:
top5 += 1
print('top5 failed, ret[:5]={}'.format(ret[:5]))
print('number of test inputs={}'.format(len(inputs)))
print('top-5 err rate={}'.format(float(top5) / len(inputs)))
print('top-1 err rate={}'.format(float(top1) / len(inputs)))
if __name__ == '__main__':
model = './alexnet-126.npz'
#model = './xAlexnet-126.npz'
testpath = './ILSVRC2012_img/val/'
#testset = [f for f in os.listdir(testpath) if os.path.isfile(os.path.join(testpath, f))]
meta = dataset.ILSVRCMeta()
testset = [os.path.join(testpath, f) for f, l in meta.get_image_list('val')]
testlabel = [l for f, l in meta.get_image_list('val')]
run_image(Model(), DictRestore(dict(np.load(model))), testset[:100], testlabel[:100])
```
|
closed
|
2018-06-28T05:19:25Z
|
2018-11-27T04:30:55Z
|
https://github.com/tensorpack/tensorpack/issues/804
|
[
"bug",
"examples"
] |
sammhho
| 13
|
ShishirPatil/gorilla
|
api
| 591
|
[BFCL] Potential bug in calculating accuracy
|
**Describe the issue**
A clear and concise description of what the issue is.
**What is the issue**
I find the calculation of the simple AST score in the non-live part in v2 is not consistent with that in v1.
v1: scores of python, java and js are summed with `calculate_weighted_accuracy`
v2: scores of python, java and js are summed with `calculate_unweighted_accuracy`
Maybe the results on the webpage are not quite correct. (I guess the results on the current webpage are computed directly from the results on the previous page and the results in the live part, i.e. (score_v1 + score_v2) / 2.)
**Proposed Changes**
**Additional context**
Add any other context about the problem here.
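For reference, a minimal sketch of the difference between the two ways of combining per-category scores (the function names mirror the ones mentioned above; the counts and accuracies are made-up numbers, not benchmark data):
```python
# Toy illustration of weighted vs unweighted averaging of per-category accuracy.
categories = {
    "python": {"accuracy": 0.90, "count": 400},
    "java":   {"accuracy": 0.80, "count": 100},
    "js":     {"accuracy": 0.70, "count": 100},
}

def calculate_unweighted_accuracy(cats):
    # simple mean over categories, ignoring how many samples each one has
    return sum(c["accuracy"] for c in cats.values()) / len(cats)

def calculate_weighted_accuracy(cats):
    # mean weighted by sample count, i.e. the overall per-sample accuracy
    total = sum(c["count"] for c in cats.values())
    return sum(c["accuracy"] * c["count"] for c in cats.values()) / total

print(calculate_unweighted_accuracy(categories))  # 0.80
print(calculate_weighted_accuracy(categories))    # 0.85
```
The two summaries only agree when every category has the same number of samples, which is why mixing them between v1 and v2 can shift the published numbers.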
|
closed
|
2024-08-21T10:40:10Z
|
2024-09-02T06:43:30Z
|
https://github.com/ShishirPatil/gorilla/issues/591
|
[
"BFCL-General"
] |
XuHwang
| 1
|
strawberry-graphql/strawberry
|
django
| 3,428
|
Add support for framework-specific upload types
|
From https://github.com/strawberry-graphql/strawberry/pull/3389
We should allow users to do things like this:
```python
import strawberry
from starlette.datastructures import UploadFile
@strawberry.type
class Mutation:
@strawberry.mutation
    def upload(self, file: UploadFile) -> str:
return "ok"
```
For every framework, so they get better typing 😊
|
open
|
2024-03-30T15:25:51Z
|
2025-03-20T15:56:39Z
|
https://github.com/strawberry-graphql/strawberry/issues/3428
|
[
"feature-request"
] |
patrick91
| 0
|
geex-arts/django-jet
|
django
| 201
|
Feature request: more flexible dashboard layout.
|
* It would be nice if the size of the dashboard modules could be decoupled from the available number of columns
* Have a button (or slider) in the interface to increment or decrement the number of columns.
* Be able to automatically add a new column when dragging and dropping a widget to the right of the last column.
* Be able to scroll horizontally, similar to Tweetdeck's idea:

* instead of having columns as the layout backbone, have a grid, that way widgets can be of variable sizes.
One image is worth a thousand words, so I've made a mockup:

|
open
|
2017-04-08T18:23:20Z
|
2017-08-15T16:01:58Z
|
https://github.com/geex-arts/django-jet/issues/201
|
[] |
Ismael-VC
| 5
|
agronholm/anyio
|
asyncio
| 376
|
UNIX socket server hangs on asyncio
|
I have written the following code, trying to create a task that runs a unix-socket-based server for testing. I want to be able to dictate what the server will send when the client tries to read, but for testing I want to be able to get back what was sent to the server. Therefore, I am using a MemoryObjectStream to echo back what the server receives:
```python
from contextlib import asynccontextmanager
from anyio import (
create_task_group,
run,
create_unix_listener,
create_memory_object_stream,
connect_unix,
sleep,
)
from anyio.streams.buffered import BufferedByteReceiveStream
@asynccontextmanager
async def make_test_server():
"""Create a temporary unix server that ."""
send_stream, receive_stream = create_memory_object_stream()
async def handler(client):
async with client:
buffered = BufferedByteReceiveStream(client)
async with send_stream:
while True:
received_bytes = await buffered.receive_until(b'\n', 0xFFFFFFFF)
print('Server sending through queue.')
await send_stream.send(received_bytes)
print('Server sent through queue.')
async with await create_unix_listener('testsocket') as server:
async with create_task_group() as tg:
tg.start_soon(server.serve, handler)
yield receive_stream
tg.cancel_scope.cancel()
async def main():
"""Uses the test server."""
async with make_test_server() as received:
async with await connect_unix('testsocket') as s:
async with received:
for i in range(3):
await s.send(f'data {i}\n'.encode())
print('Fetching from queue.')
print(await received.receive(), flush=True)
print('Fetched from queue.')
run(main)
```
If I use the `trio` backend, the code runs as expected, with the following output:
```
Fetching from queue.
Server sending through queue.
Server sent through queue.
b'data 0'
Fetched from queue.
Server sending through queue.
Fetching from queue.
b'data 1'
Fetched from queue.
Server sent through queue.
Fetching from queue.
Server sending through queue.
Server sent through queue.
b'data 2'
Fetched from queue.
```
However, if I run it with the `asyncio` backend, it hangs:
```
Fetching from queue.
Server sending through queue.
Server sent through queue.
```
Removing the MemoryObjectStream completely from the code also makes the problem go away.
Am I using MemoryObjectStream wrong somehow, or is this maybe a bug?
|
closed
|
2021-10-09T20:23:19Z
|
2021-10-10T19:09:53Z
|
https://github.com/agronholm/anyio/issues/376
|
[
"bug",
"asyncio"
] |
tarcisioe
| 6
|
stanford-oval/storm
|
nlp
| 342
|
[FEATURE] Define model used for embeddings (esp. also azure endpoints)
|
**Describe the bug**
Currently the embedding model is hard-coded in https://github.com/stanford-oval/storm/blob/e80d9bbea7362141a479940dabb751c1f244e4b6/knowledge_storm/encoder.py#L83
This can be an issue in many ways:
- You want to use different model providers
- You don't have enough quota on `text-embedding-3-small`
- You use an Azure region where `text-embedding-3-small` is not available
**Describe the feature**
- Use an environment variable to specify the embedding model, e.g. 'azure/text-embedding-ada-002', as well as the Azure endpoint, which could be different from the completion model endpoints.
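A rough sketch of what the requested change could look like (the variable names `EMBEDDING_MODEL` and `AZURE_EMBEDDING_API_BASE` are hypothetical, not existing config keys, and the hand-off to the embedding client is only indicative):
```python
import os

# Hypothetical environment variables -- encoder.py does not currently read these.
embedding_model = os.environ.get("EMBEDDING_MODEL", "text-embedding-3-small")
embedding_api_base = os.environ.get("AZURE_EMBEDDING_API_BASE")  # may differ from the completion endpoint

embed_kwargs = {"model": embedding_model}
if embedding_api_base:
    embed_kwargs["api_base"] = embedding_api_base
# embed_kwargs would then be passed to whatever embedding client encoder.py builds,
# instead of the hard-coded "text-embedding-3-small".
```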
|
open
|
2025-03-19T11:33:26Z
|
2025-03-19T11:33:42Z
|
https://github.com/stanford-oval/storm/issues/342
|
[] |
danieldekay
| 0
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 3,391
|
Multi-site feature, after first signup the main site is unavailable.
|
### What version of GlobaLeaks are you using?
Latest - new install
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
Linux
### Describe the issue
I’ve recently installed GlobaLeaks. When I activate the multi-site feature, the signup form is available.
But after the first signup has been created and the credentials sent out, the main GlobaLeaks admin is unavailable: the login screen now displays the newly created multi-site as the main site.
### Proposed solution
_No response_
|
closed
|
2023-03-20T12:53:23Z
|
2023-03-22T07:58:55Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3391
|
[] |
SJD-DK
| 8
|
miguelgrinberg/Flask-Migrate
|
flask
| 116
|
flask-migrate not detecting 'compare_server_default' in env.py
|
When setting up a new database, my initial migration script does not contain the defaults I put in the model. For example, if in my model.py I have:
`class Devices(db.Model):`
`id = db.Column(db.Integer, primary_key=True, autoincrement=True)`
`name = db.Column(db.String(255), unique=True, nullable=False)`
`enabled = db.Column(db.Boolean, nullable=False, default=1)`
And in my migrations/env.py I change run_migrations_online() to have:
`context.configure(connection=connection, target_metadata=target_metadata,`
`process_revision_directives=process_revision_directives,`
`compare_server_default=True,`
`**current_app.extensions['migrate'].configure_args)`
My resulting migrations/versions/foo.py doesn't have any defaults. What am I doing wrong?
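For what it's worth, my understanding is that `default=1` is a client-side (Python) default that never reaches the database schema, so `compare_server_default` has nothing to compare; only `server_default` is emitted as DDL and picked up by autogenerate. A minimal sketch of the change, assuming that is the cause (the exact literal depends on the database backend, e.g. `'true'` for PostgreSQL booleans):
```python
class Devices(db.Model):
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    name = db.Column(db.String(255), unique=True, nullable=False)
    # server_default is part of the table DDL, so Alembic autogenerate can see it
    enabled = db.Column(db.Boolean, nullable=False, server_default='1')
```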
|
closed
|
2016-06-13T13:36:50Z
|
2016-06-14T05:48:35Z
|
https://github.com/miguelgrinberg/Flask-Migrate/issues/116
|
[
"invalid"
] |
kilgorejd
| 4
|
JaidedAI/EasyOCR
|
deep-learning
| 1,303
|
Fine-tuning the model for reading both English and Arabic
|
I am working on a project where the text is a mix of English and Arabic, and even though EasyOCR is made to recognize both, I am facing the following issue:
Once the model comes across a slightly unclear English character, it immediately switches to Arabic and starts giving indecipherable results. However, those same examples were read with high accuracy when I set it to only read English.
Could this be fixed by fine-tuning it on both languages? Are there other suggestions? Please let me know.
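In case it is useful, before fine-tuning it may be worth confirming that both languages are loaded in a single reader and filtering on the returned confidence scores. A minimal sketch (the image path and threshold are placeholders):
```python
import easyocr

# One reader handling both scripts; Arabic ('ar') can be combined with English ('en').
reader = easyocr.Reader(['ar', 'en'], gpu=True)

# detail=1 returns (box, text, confidence), so low-confidence detections can be filtered.
results = reader.readtext('mixed_text_sample.jpg', detail=1)
for box, text, conf in results:
    if conf > 0.4:  # threshold is a guess; tune it on your data
        print(text, conf)
```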
|
open
|
2024-09-06T05:41:44Z
|
2024-09-06T05:41:44Z
|
https://github.com/JaidedAI/EasyOCR/issues/1303
|
[] |
mariam-alsaleh
| 0
|
wger-project/wger
|
django
| 942
|
Unable to locate static assets while running in container
|
When running in a Docker container on Kubernetes, I am seeing errors about not being able to find static assets like CSS and Javascript files. It was originally running correctly, and I'm not sure at all what changed when it stopped finding them. I have all static assets stored on a persistent volume mounted to /home/wger/static in the container. The K8s deployment is configured with the DJANGO_STATIC_ROOT environment variable, and the volume is accessible from the container.
I will be happy to provide any additional information about my setup. I honestly don't really know how to test or verify the static assets within Django specifically, though. Here are some screenshots of examples of what I have configured and what I see, but I can copy the full deployment configuration if needed.





|
open
|
2022-01-13T03:46:13Z
|
2024-09-17T04:53:07Z
|
https://github.com/wger-project/wger/issues/942
|
[] |
venom85
| 7
|
pyg-team/pytorch_geometric
|
pytorch
| 9,685
|
Distributed error when using 4 nodes
|
### 🐛 Describe the bug
I encountered an issue while testing `examples/distributed/pyg/node_ogb_cpu.py` with four nodes and small batch sizes (e.g., 10). Using `batch_size=1` triggers the error immediately. The script fails if all source nodes belong to the same partition when executing the following line in `torch_geometric/distributed/dist_neighbor_sampler.py` within the `_get_sampler_output` function:
```cumsum_neighbors_per_node = outputs[p_id].metadata[0]```
Error message: `AttributeError: 'NoneType' object has no attribute 'metadata'`
I noticed that when the `p_id` is 0, but outputs[0] is None: `[None, None, SamplerOutput(...), None]`
It seems that `p_id` is being computed incorrectly in the following code segment:
```
if not local_only:
# Src nodes are remote
res_fut_list = await to_asyncio_future(
torch.futures.collect_all(futs))
for i, res_fut in enumerate(res_fut_list):
p_id = (self.graph_store.partition_idx + i +
1) % self.graph_store.num_partitions
p_outputs.pop(p_id)
p_outputs.insert(p_id, res_fut.wait())
```
### Versions
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.4.1
[pip3] torch_cluster==1.6.3+pt24cu124
[pip3] torch-geometric==2.6.0
[pip3] torch_scatter==2.1.2+pt24cu124
[pip3] torch_sparse==0.6.18+pt24cu124
[pip3] torch_spline_conv==1.2.2+pt24cu124
[pip3] torchaudio==2.4.1
[pip3] torchvision==0.19.1
[pip3] triton==3.0.0
[conda] blas 1.0 mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] numpy 2.0.2 pypi_0 pypi
[conda] pytorch 2.4.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-cluster 1.6.3+pt24cu124 pypi_0 pypi
[conda] torch-geometric 2.6.0 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt24cu124 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt24cu124 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt24cu124 pypi_0 pypi
[conda] torchaudio 2.4.1 py312_cu124 pytorch
[conda] torchtriton 3.0.0 py312 pytorch
[conda] torchvision 0.19.1 py312_cu124 pytorch
|
open
|
2024-10-01T16:59:15Z
|
2024-10-03T18:10:13Z
|
https://github.com/pyg-team/pytorch_geometric/issues/9685
|
[
"bug",
"distributed"
] |
seakkas
| 0
|
pallets/flask
|
python
| 5,268
|
When was 'Blueprint.before_app_first_request' deprecated and what replaces it?
|
I have resumed work on a Flask 1.x-based project after some time has passed, and sorting through deprecations and removals that have happened in the interim. The first place I got stuck was the `@bp.before_app_first_request` decorator.
Except to note that <q>`app.before_first_request` and `bp.before_app_first_request` decorators are removed</q> in 2.3.0, [the changelog](https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-0) doesn't mention when these were deprecated, why, or what replaced them.
Perusing previous versions of the docs, I discover that `Blueprint.before_app_first_request` was deprecated in 2.2, with this note:
> Deprecated since version 2.2: Will be removed in Flask 2.3. Run setup code when creating the application instead.
Something in my gut tells me that I'm going to be writing a kind of hook handler that will iterate through all the blueprints to run their "before first request" initialization routines. Effectively, an amateurish reimplementation of what `Blueprint.before_app_first_request` already did.
That leaves me wondering about the context for this deprecation. Can someone enlighten me as to why it was removed?
_To be clear, I intend to document whatever workaround I come up with (for the benefit of others in this situation) and close this issue myself. I'm not asking for tech support or "homework help" from the Flask maintainers, or to resurrect the feature from the grave. `:)`_
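For anyone else landing here: as far as I can tell, "run setup code when creating the application" means moving the initialization into the app factory (or attaching it with `Blueprint.record_once`, which fires when the blueprint is registered). A minimal sketch, assuming an application-factory layout (`init_caches` and the blueprint import are placeholders for your own code):
```python
from flask import Flask
from myapp.views import bp  # hypothetical module that defines the blueprint


def init_caches():
    """Placeholder for whatever used to run in before_app_first_request."""
    ...


def create_app():
    app = Flask(__name__)
    app.register_blueprint(bp)

    # Run the one-time setup at startup, inside an app context so db/extensions work.
    with app.app_context():
        init_caches()

    return app


# Alternative that keeps the hook next to the blueprint definition:
# @bp.record_once runs once per registration and receives the real app object.
#
# @bp.record_once
# def _setup(state):
#     with state.app.app_context():
#         init_caches()
```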
|
closed
|
2023-09-29T23:06:44Z
|
2023-10-15T00:06:15Z
|
https://github.com/pallets/flask/issues/5268
|
[] |
ernstki
| 1
|
youfou/wxpy
|
api
| 325
|
Can the automatic message handler also process the user's reply messages?
|
For example, a message starting with a certain keyword enters the automatic message-handling function, which then returns some text and waits for the user to reply "yes" (是) or "no" (否): "yes" continues the processing, "no" drops it and goes back to listening for the next request. How can I get hold of the reply text and process it?
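An untested sketch of one way to do this with wxpy's register/reply pattern: keep a per-sender flag so the next message from that user is treated as the yes/no answer (the trigger keyword and reply strings are placeholders from the scenario above):
```python
from wxpy import Bot, TEXT

bot = Bot()
pending = {}  # sender name -> the original request waiting for confirmation

@bot.register(msg_types=TEXT)
def handle(msg):
    sender = msg.sender.name
    text = msg.text.strip()

    if sender in pending:            # this message is the reply to our question
        request = pending.pop(sender)
        if text == '是':             # "yes": continue processing
            return 'Processing: {}'.format(request)
        return 'Cancelled'           # "no" (or anything else): drop it

    if text.startswith('keyword'):   # placeholder trigger keyword
        pending[sender] = text
        return 'Confirm? Reply 是 (yes) or 否 (no)'

bot.join()                           # keep listening for messages
```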
|
open
|
2018-07-23T09:57:20Z
|
2018-07-24T07:10:20Z
|
https://github.com/youfou/wxpy/issues/325
|
[] |
fireclaw
| 2
|
google/seq2seq
|
tensorflow
| 206
|
Evaluation takes too much time. Is it normal?
|
The dev data contains 5000 sequences.
============================================================
INFO:tensorflow:Creating ZeroBridge in mode=eval
INFO:tensorflow:
ZeroBridge: {}
INFO:tensorflow:Starting evaluation at 2017-05-04-07:43:55
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080,
pci bus id: 0000:01:00.0)
W tensorflow/core/framework/op_kernel.cc:993] Out of range: FIFOQueue '_30_dev_input_fn/parallel_read_1/common_queue' is closed an
d has insufficient elements (requested 1, current size 0)
[[Node: dev_input_fn/parallel_read_1/common_queue_Dequeue = QueueDequeueV2[component_types=[DT_STRING, DT_STRING], timeou
t_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](dev_input_fn/parallel_read_1/common_queue)]]
W tensorflow/core/framework/op_kernel.cc:993] Out of range: FIFOQueue '_30_dev_input_fn/parallel_read_1/common_queue' is closed an
d has insufficient elements (requested 1, current size 0)
[[Node: dev_input_fn/parallel_read_1/common_queue_Dequeue = QueueDequeueV2[component_types=[DT_STRING, DT_STRING], timeou
t_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](dev_input_fn/parallel_read_1/common_queue)]]
W tensorflow/core/framework/op_kernel.cc:993] Out of range: PaddingFIFOQueue '_29_dev_input_fn/batch_queue/padding_fifo_queue' is
closed and has insufficient elements (requested 64, current size 0)
[[Node: dev_input_fn/batch_queue = QueueDequeueUpToV2[component_types=[DT_INT32, DT_STRING, DT_INT32, DT_STRING], timeout
_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](dev_input_fn/batch_queue/padding_fifo_queue, dev_input_fn/batch_queue/n)]
]
W tensorflow/core/framework/op_kernel.cc:993] Out of range: PaddingFIFOQueue '_29_dev_input_fn/batch_queue/padding_fifo_queue' is
closed and has insufficient elements (requested 64, current size 0)
[[Node: dev_input_fn/batch_queue = QueueDequeueUpToV2[component_types=[DT_INT32, DT_STRING, DT_INT32, DT_STRING], timeout
_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](dev_input_fn/batch_queue/padding_fifo_queue, dev_input_fn/batch_queue/n)]
]
W tensorflow/core/framework/op_kernel.cc:993] Out of range: PaddingFIFOQueue '_29_dev_input_fn/batch_queue/padding_fifo_queue' is closed and has insufficient elements (requested 64, current size 0)
[[Node: dev_input_fn/batch_queue = QueueDequeueUpToV2[component_types=[DT_INT32, DT_STRING, DT_INT32, DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](dev_input_fn/batch_queue/padding_fifo_queue, dev_input_fn/batch_queue/n)]]
W tensorflow/core/framework/op_kernel.cc:993] Out of range: PaddingFIFOQueue '_29_dev_input_fn/batch_queue/padding_fifo_queue' is closed and has insufficient elements (requested 64, current size 0)
[[Node: dev_input_fn/batch_queue = QueueDequeueUpToV2[component_types=[DT_INT32, DT_STRING, DT_INT32, DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](dev_input_fn/batch_queue/padding_fifo_queue, dev_input_fn/batch_queue/n)]]
W tensorflow/core/framework/op_kernel.cc:993] Out of range: PaddingFIFOQueue '_29_dev_input_fn/batch_queue/padding_fifo_queue' is closed and has insufficient elements (requested 64, current size 0)
[[Node: dev_input_fn/batch_queue = QueueDequeueUpToV2[component_types=[DT_INT32, DT_STRING, DT_INT32, DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](dev_input_fn/batch_queue/padding_fifo_queue, dev_input_fn/batch_queue/n)]]
W tensorflow/core/framework/op_kernel.cc:993] Out of range: PaddingFIFOQueue '_29_dev_input_fn/batch_queue/padding_fifo_queue' is closed and has insufficient elements (requested 64, current size 0)
[[Node: dev_input_fn/batch_queue = QueueDequeueUpToV2[component_types=[DT_INT32, DT_STRING, DT_INT32, DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](dev_input_fn/batch_queue/padding_fifo_queue, dev_input_fn/batch_queue/n)]]
W tensorflow/core/framework/op_kernel.cc:993] Out of range: PaddingFIFOQueue '_29_dev_input_fn/batch_queue/padding_fifo_queue' is closed and has insufficient elements (requested 64, current size 0)
[[Node: dev_input_fn/batch_queue = QueueDequeueUpToV2[component_types=[DT_INT32, DT_STRING, DT_INT32, DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](dev_input_fn/batch_queue/padding_fifo_queue, dev_input_fn/batch_queue/n)]]
INFO:tensorflow:Finished evaluation at 2017-05-04-08:59:35
INFO:tensorflow:Saving dict for global step 1001: bleu = 16.43, global_step = 1001, log_perplexity = 1.39028, loss = 1.36901, rouge_1/f_score = 0.746607, rouge_1/p_score = 0.945662, rouge_1/r_score = 0.628931, rouge_2/f_score = 0.45424, rouge_2/p_score = 0.582968, rouge_2/r_score = 0.384094, rouge_l/f_score = 0.354937
WARNING:tensorflow:Skipping summary for global_step, must be a float or np.float32.
|
closed
|
2017-05-04T09:57:28Z
|
2017-05-17T15:00:36Z
|
https://github.com/google/seq2seq/issues/206
|
[] |
yezhejack
| 6
|
ageitgey/face_recognition
|
machine-learning
| 873
|
Error while installing face_recognition
|
* face_recognition version:-
* Python version:3.5.3
* Operating System:Raspbian Stretch
### Description
I'm a beginner at all this. I need to install face_recognition on a Raspberry Pi 3 B+, but I get the error below. Can somebody provide a solution for this?
### What I Did
I have followed Adrian's tutorial (https://www.pyimagesearch.com/2018/06/18/face-recognition-with-opencv-python-and-deep-learning) on face recognition and installed it on my laptop successfully. Now I want to install it on a Raspberry Pi 3 B+ with a 64 GB SD card. After many tries I managed to install OpenCV 3.3 and dlib, and got as far as installing the face_recognition library, but then this error happens:
```
(cv) pi@raspberrypi:~ $ pip install face_recognition
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting face_recognition
Using cached https://files.pythonhosted.org/packages/3f/ed/ad9a28042f373d4633fc8b49109b623597d6f193d3bbbef7780a5ee8eef2/face_recognition-1.2.3-py2.py3-none-any.whl
Collecting face-recognition-models>=0.3.0 (from face_recognition)
Downloading https://www.piwheels.org/simple/face-recognition-models/face_recognition_models-0.3.0-py2.py3-none-any.whl (100.6MB)
|███▏ | 10.0MB 810kB/s eta 0:01:52
Collecting Click>=6.0 (from face_recognition)
Using cached https://files.pythonhosted.org/packages/fa/37/45185cb5abbc30d7257104c434fe0b07e5a195a6847506c074527aa599ec/Click-7.0-py2.py3-none-any.whl
Requirement already satisfied: dlib>=19.7 in ./.virtualenvs/cv/lib/python3.5/site-packages (from face_recognition) (19.17.0)
Collecting Pillow (from face_recognition)
Using cached https://files.pythonhosted.org/packages/51/fe/18125dc680720e4c3086dd3f5f95d80057c41ab98326877fc7d3ff6d0ee5/Pillow-6.1.0.tar.gz
Requirement already satisfied: numpy in ./.virtualenvs/cv/lib/python3.5/site-packages (from face_recognition) (1.16.4)
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
face-recognition-models>=0.3.0 from https://www.piwheels.org/simple/face-recognition-models/face_recognition_models-0.3.0-py2.py3-none-any.whl#sha256=8d6b0af2e37a17120c3f13107974bc252142a4ffcb4e58eabdfcf26608e52c24 (from face_recognition):
Expected sha256 8d6b0af2e37a17120c3f13107974bc252142a4ffcb4e58eabdfcf26608e52c24
Got e26ccf08c73972b06e31f33add15e7eedd1c3deae787bc7d7a3fdd3a5fcf6d02
```
|
closed
|
2019-07-03T11:13:39Z
|
2019-07-15T14:41:08Z
|
https://github.com/ageitgey/face_recognition/issues/873
|
[] |
Sanjaiii
| 3
|
roboflow/supervision
|
pytorch
| 1,339
|
draw wrong OrientedBoxAnnotator with InferenceSlicer
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
I'm using the official yolov8m_obb.pt, and if I write code like below:
```python
def callback(image_slice: np.ndarray) -> sv.Detections:
result = model(image_slice)[0]
return sv.Detections.from_ultralytics(result)
slicer = sv.InferenceSlicer(callback=callback, thread_workers=1)
detections = slicer(image)
bounding_box_annotator = sv.OrientedBoxAnnotator()
annotated_image = bounding_box_annotator.annotate(scene=image, detections=detections)
```
The oriented boxes drawn by OrientedBoxAnnotator in annotated_image were in the wrong place, and if I check the variables inside detections,
it looks like this:
```text
Detections(xyxy=array([[ 759.02, 8.0095, 770.99, 12.812]]), mask=None, confidence=array([ 0.31441], dtype=float32), class_id=array([1]), tracker_id=None, data={'xyxyxyxy': array([[[ 247.02, 8.3203],
[ 247.13, 12.812],
[ 258.99, 12.501],
[ 258.87, 8.0095]]], dtype=float32), 'class_name': array(['Van'], dtype='<U13')})
```
where xyxy doesn't match xyxyxyxy, and the boxes in the picture are wrong too

But if I don't use the InferenceSlicer, the result is correct.
### Environment
```toml
python = 3.9
dependencies = [
"ultralytics~=8.2.45",
"opencv-contrib-python~=4.10.0.84",
"onnx~=1.16.1",
"supervision~=0.21.0",
]
```
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
closed
|
2024-07-10T05:52:36Z
|
2024-07-18T13:46:39Z
|
https://github.com/roboflow/supervision/issues/1339
|
[
"bug"
] |
DawnMagnet
| 7
|
indico/indico
|
flask
| 6,120
|
Room Booking: Horizontal padding of `timeline-occurrence` causes visual / selectability issues in monthly view
|
Due to the horizontal padding in:
https://github.com/indico/indico/blob/8ff31c228008a9847a10cbe70952c160f8c79630/indico/modules/rb/client/js/common/timeline/TimelineItem.module.scss#L21
bookings which are close to each other overlap in the monthly view, for example:

For this reason, some occurrences can not be selected (or selecting them is barely possible).
Removing the horizontal padding from the SCSS makes things work as expected :wink: .
|
open
|
2024-01-09T10:06:50Z
|
2024-01-10T15:28:04Z
|
https://github.com/indico/indico/issues/6120
|
[] |
olifre
| 2
|
xinntao/Real-ESRGAN
|
pytorch
| 97
|
Learning rate not updating during fine-tuning
|
I am getting lr=2e-4 all the time. I tried to update `options/finetune_realesrgan_x4plus_pairdata.yml`, but the lr still does not update.
|
open
|
2021-09-26T05:37:25Z
|
2021-09-27T06:32:39Z
|
https://github.com/xinntao/Real-ESRGAN/issues/97
|
[] |
Shahidul1004
| 1
|
cupy/cupy
|
numpy
| 8,674
|
Rawkernel reduces the precision of complex-number calculations
|
### Description
I tried to use jit.rawkernel to do some complex-number calculations, but the results have small errors. The following code gives an example; I have explicitly specified the data type as complex128.
### Reproduce
```python
from cupyx import jit
import cupy as cp
from numba import cuda
import numpy as np
@jit.rawkernel()
def cupy_test(y):
tid = jit.blockIdx.x * jit.blockDim.x + jit.threadIdx.x
wavek = -2 * 3.1415926 / 1.5e-6
for j in range(734):
y[tid] += cp.complex128(wavek * 1.0j)
@cuda.jit
def numba_test(y):
tid = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
wavek = -2 * 3.1415926 / 1.5e-6
for j in range(734):
y[tid] += complex(0, wavek)
size = 32 * 32
x = cp.empty((size,), dtype=cp.complex128)
x[:] = 0.0 + 0.0j
y = np.empty((size,), dtype=np.complex128)
y[:] = 0.0 + 0.0j
z = np.empty((size,), dtype=np.complex128)
z[:] = 0.0 + 0.0j
cupy_test[32,32](x)
numba_test[32,32](y)
wavek = -2 * 3.1415926 / 1.5e-6
for i in range(734):
z[:] += cp.complex128(wavek * 1.j)
print("For cupy-kernel, the result is {}".format(x[0]))
print("For numba, the result is {}".format(y[0]))
print("For cupy, the result is {}".format(z[0]))
print("The real result is {}".format(wavek * 734 * 1.j))
```
Then the output is
```Plaintext
For cupy-kernel, the result is -3074572043.5j
For numba, the result is -3074571957.8666654j
For cupy, the result is -3074571957.8666654j
The real result is (-0-3074571957.866667j)
```
### Environment
```shell
❯ python -c "import cupy; cupy.show_config()"
OS : Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python Version : 3.11.6
CuPy Version : 13.3.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 1.24.4
SciPy Version : 1.11.3
Cython Build Version : 0.29.37
Cython Runtime Version : None
CUDA Root : /usr/local/cuda-12.2
nvcc PATH : /usr/local/cuda-12.2/bin/nvcc
CUDA Build Version : 12060
CUDA Driver Version : 12020
CUDA Runtime Version : 12060 (linked to CuPy) / 12060 (locally installed)
CUDA Extra Include Dirs : ['/home/gqzhang/miniconda3/envs/optisim/targets/x86_64-linux/include', '/home/gqzhang/miniconda3/envs/optisim/include']
cuBLAS Version : (available)
cuFFT Version : 11206
cuRAND Version : 10307
cuSOLVER Version : (11, 6, 4)
cuSPARSE Version : (available)
NVRTC Version : (12, 6)
Thrust Version : 200500
CUB Build Version : 200600
Jitify Build Version : <unknown>
cuDNN Build Version : None
cuDNN Version : None
NCCL Build Version : None
NCCL Runtime Version : None
cuTENSOR Version : None
cuSPARSELt Build Version : None
Device 0 Name : NVIDIA GeForce RTX 4060 Laptop GPU
Device 0 Compute Capability : 89
Device 0 PCI Bus ID : 0000:01:00.0
```
|
closed
|
2024-10-17T07:06:45Z
|
2024-10-18T06:34:53Z
|
https://github.com/cupy/cupy/issues/8674
|
[
"issue-checked"
] |
astrogqzhang
| 5
|
ipython/ipython
|
jupyter
| 13,897
|
Remove newline_with_copy_margin.
|
It has been deprecated for at least 2 versions. We can remove this.
_Originally posted by @Carreau in https://github.com/ipython/ipython/pull/13888#discussion_r1071293652_
|
closed
|
2023-01-17T08:29:47Z
|
2023-02-21T13:14:14Z
|
https://github.com/ipython/ipython/issues/13897
|
[] |
Carreau
| 1
|
open-mmlab/mmdetection
|
pytorch
| 11,691
|
BUG: clip_grad doesn't work
|
Hi,
Although I set clip_grad as below, the grad_norm still explodes... Can anyone help me?
```
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(
_delete_=True, type='AdamW', lr=0.0001, weight_decay=0.0001),
paramwise_cfg=dict(
custom_keys={'backbone': dict(lr_mult=0.1, decay_mult=1.0)}),
clip_grad=dict(max_norm=35., norm_type=2))
```

|
open
|
2024-05-08T15:33:30Z
|
2024-05-08T15:33:48Z
|
https://github.com/open-mmlab/mmdetection/issues/11691
|
[] |
Pixie8888
| 0
|
thp/urlwatch
|
automation
| 665
|
How can I add a custom reporter to urlwatch, like WeChat?
|
I want to push messages to WeChat. A WeChat official account provides a way to do this via the endpoint `http://pushplus.hxtrip.com/message`, which requires authorizing with a WeChat account.
To be clear, calling it is very easy: it just needs a token and the content, then it submits a POST or GET request to the server, and my WeChat account receives the message.
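A minimal sketch of the HTTP side, based only on the token/content description above (the field names should be checked against the pushplus documentation), which a custom reporter could call for each changed URL:
```python
import requests

def send_wechat(token, title, content):
    # "token" and "content" follow the description in this issue; verify against pushplus docs.
    resp = requests.post(
        "http://pushplus.hxtrip.com/message",
        json={"token": token, "title": title, "content": content},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# A custom urlwatch reporter (e.g. defined in hooks.py) could call
# send_wechat() with the job name and the unified diff of each change.
```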
|
open
|
2021-08-29T11:33:22Z
|
2021-11-07T09:02:22Z
|
https://github.com/thp/urlwatch/issues/665
|
[
"question"
] |
ZJahon
| 1
|
google-deepmind/graph_nets
|
tensorflow
| 15
|
EncodeProcessDecode model documentation
|
Hi,
I've been trying to understand the EncodeProcessDecode model parameters; however, the documentation is a little scarce on the parameters (edge_output_size, node_output_size, global_output_size).
In the code I found the following comment `# Transforms the outputs into the appropriate shapes.` but that doesn't clarify the exact reshaping. Can anybody clarify?
Thanks a lot!
|
closed
|
2018-11-07T21:15:42Z
|
2018-11-10T19:51:25Z
|
https://github.com/google-deepmind/graph_nets/issues/15
|
[] |
ferreirafabio
| 7
|
OpenInterpreter/open-interpreter
|
python
| 957
|
Open-Interpreter not Starting
|
### Describe the bug
When I run the `interpreter` command, I get:
`Traceback (most recent call last):
File "/usr/local/bin/interpreter", line 5, in <module>
from interpreter import interpreter
File "/usr/local/lib/python3.10/site-packages/interpreter/__init__.py", line 1, in <module>
from .core.core import OpenInterpreter
File "/usr/local/lib/python3.10/site-packages/interpreter/core/core.py", line 10, in <module>
from ..terminal_interface.start_terminal_interface import start_terminal_interface
File "/usr/local/lib/python3.10/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 16, in <module>
from .validate_llm_settings import validate_llm_settings
File "/usr/local/lib/python3.10/site-packages/interpreter/terminal_interface/validate_llm_settings.py", line 5, in <module>
import litellm
File "/usr/local/lib/python3.10/site-packages/litellm/__init__.py", line 472, in <module>
from .timeout import timeout
File "/usr/local/lib/python3.10/site-packages/litellm/timeout.py", line 20, in <module>
from litellm.exceptions import Timeout
File "/usr/local/lib/python3.10/site-packages/litellm/exceptions.py", line 12, in <module>
from openai import (
ImportError: cannot import name 'AuthenticationError' from 'openai' (/usr/local/lib/python3.10/site-packages/openai/__init__.py)
`
### Reproduce
Running `interpreter`
### Expected behavior
Interpreter startup
### Screenshots
_No response_
### Open Interpreter version
0.2.0
### Python version
3.10.8
### Operating System name and version
MacOS 12
### Additional context
There's a closed issue about this error here: https://github.com/KillianLucas/open-interpreter/issues/859#issue-2062077441
The bug should've been completely fixed in `0.2.0`, but I'm still experiencing it.
I commented in the thread but didn't have access to reopen the issue, so I decided to open this.
I'm experiencing this with `open-interpreter=0.2.0` and `openai==1.9.0` on MacOS. I tried downgrading each library to see if that would change anything, but it didn't.
|
closed
|
2024-01-22T00:36:09Z
|
2024-02-12T10:41:02Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/957
|
[
"Bug"
] |
mayowaosibodu
| 2
|
keras-team/keras
|
machine-learning
| 20,388
|
Inconsistent loss/metrics with jax backend
|
Training an LSTM-based model with `mean_squared_error` loss, I got the following training results, for which the math doesn't add up: the values of the loss (MSE) and metric (RMSE) are inconsistent.
Would anyone have an insight as to what could be happening here? Thank you in advance.
<img width="1859" alt="Screenshot 2024-10-21 at 23 33 39" src="https://github.com/user-attachments/assets/f60b95bc-5e07-45c4-8cee-5e33bbcc7e0c">
|
closed
|
2024-10-21T21:37:52Z
|
2024-11-12T12:39:49Z
|
https://github.com/keras-team/keras/issues/20388
|
[
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] |
dkgaraujo
| 9
|
gevent/gevent
|
asyncio
| 1,816
|
Calling gevent.sleep from a thread creates an anon_inode and eventually "too many files open" error
|
* gevent version: 1.4.0 and 21.8.0 on Raspberry Pi installed using pip3
* Python version: 3.7.3
* Operating System: Please be as specific as possible. For example,
"Raspbian Buster"
### Description:
When gevent.sleep() is called from another thread, an anon_inode is created. If this happens enough, a "too many files open" error from the OS will be produced. For example, on a Raspberry Pi the default maximum number of file descriptors that a single process can have open is 1024. So once gevent.sleep() has been called from 1024 different threads, this error will occur.
```python traceback
SystemError: (libev) error creating signal/async pipe: Too many open files
2021-08-28T14:33:11Z
Traceback (most recent call last):
File "src/gevent/libev/corecext.pyx", line 1327, in gevent.libev.corecext._syserr_cb
File "src/gevent/libev/corecext.pyx", line 571, in gevent.libev.corecext.loop._handle_syserr
File "src/gevent/libev/corecext.pyx", line 584, in gevent.libev.corecext.loop.handle_error
File "/home/pi/.local/lib/python3.7/site-packages/gevent/hub.py", line 543, in handle_error
File "/home/pi/.local/lib/python3.7/site-packages/gevent/hub.py", line 569, in handle_system_error
greenlet.error: cannot switch to a different thread
(libev) error creating signal/async pipe: Too many open files
Aborted
```
### To Reproduce
Minimal script to produce issue (reproducible on a Raspberry Pi running standard OS):
```python
import gevent
import _thread
import time
def newdelay():
gevent.sleep(.001)
print("done sleeping in thread - this created an a_inode")
return
while 1:
time.sleep(0.01)
_thread.start_new_thread(newdelay,())
```
After this script is started, check the number of file descriptors that it has open by running (it will increase quickly):
`lsof -p [PID of script] | wc -l`
After about 10 seconds the "too many files open" error will occur.
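For what it's worth, the leaked descriptors appear to come from the per-thread hub/event loop that gevent creates the first time a gevent API is used in a native thread. A sketch of a workaround under that assumption: destroy the hub before the thread exits (or avoid calling gevent from raw threads entirely):
```python
import time
import _thread

import gevent
from gevent import get_hub

def newdelay():
    gevent.sleep(.001)
    # Each native thread that touches gevent gets its own hub, with its own
    # signal/async pipe file descriptors. Destroying the hub before the thread
    # exits releases those descriptors instead of letting them accumulate.
    get_hub().destroy(destroy_loop=True)

while 1:
    time.sleep(0.01)
    _thread.start_new_thread(newdelay, ())
```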
|
closed
|
2021-08-28T14:37:35Z
|
2021-08-30T20:42:54Z
|
https://github.com/gevent/gevent/issues/1816
|
[
"Type: Question"
] |
aaknitt
| 2
|
capitalone/DataProfiler
|
pandas
| 415
|
Unable to get Month as a label
|
Using DP version **c1-dataprofiler[ml]==0.7.1**
When DP is given data containing month names, the column is labelled as `unknown`.
Below is an example that reproduces this:
`dp.Profiler(['MARCH']*30).report()`
|
open
|
2021-09-16T20:30:39Z
|
2021-09-16T20:30:39Z
|
https://github.com/capitalone/DataProfiler/issues/415
|
[] |
dipika-m
| 0
|
streamlit/streamlit
|
machine-learning
| 10,745
|
Support masked input in `st.text_input`
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Allow the user to specify a `mask` for `st.text_input` to enforce specific input formats:

<img width="588" alt="Image" src="https://github.com/user-attachments/assets/492f42af-f89f-4b7f-8d76-bd7390939f1c" />
### Why?
_No response_
### How?
```python
st.text_input("Phone number", mask="(999) 999-9999")
```
### Additional Context
- Baseweb provides this [via `MaskedInput` & the `mask` parameter](https://baseweb.design/components/input/#maskedinput)). It uses the [react-input-mask library](https://github.com/sanniassin/react-input-mask).
|
open
|
2025-03-12T15:43:44Z
|
2025-03-17T12:43:25Z
|
https://github.com/streamlit/streamlit/issues/10745
|
[
"type:enhancement",
"feature:st.text_input"
] |
lukasmasuch
| 2
|
pinry/pinry
|
django
| 196
|
Official docker image
|
Is there an official docker image?
|
closed
|
2020-05-08T16:00:26Z
|
2020-05-17T04:45:56Z
|
https://github.com/pinry/pinry/issues/196
|
[] |
lonix1
| 12
|
axnsan12/drf-yasg
|
django
| 698
|
Customized pagination class is not supported?
|
I'm using `pagination_class = newPagination` in the view file.
1. newPagination is a customized class based on `rest_framework.pagination.PageNumberPagination` (it overrides `get_paginated_response`).
2. I get the correct response in web tests.
3. `drf-yasg` shows the response according to the original `rest_framework.pagination.PageNumberPagination.get_paginated_response()` (the overridden function is not used).


|
open
|
2021-02-03T16:56:38Z
|
2025-03-07T12:13:28Z
|
https://github.com/axnsan12/drf-yasg/issues/698
|
[
"triage"
] |
MrBeike
| 7
|
deepinsight/insightface
|
pytorch
| 1,798
|
Resuming ArcFace-Torch training starts from the beginning and the epoch is set to 0
|
Hi, Thanks for your nice work!
I am using Colab to train ArcFace-Torch on the Asian-Celeb dataset. After Colab disconnects, I want to continue training from where it left off by setting config.resume = True, and the latest backbone.pth file loads successfully. However, the training process starts all over from epoch 0, and the acc and loss look the same as (or even worse than) when I first trained from scratch. Did I successfully resume training? What should I do to fix it? Thanks in advance.
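In case it helps: if the resume path only restores the backbone weights, the epoch counter, optimizer state and LR schedule all start from scratch, which would explain what you are seeing. A generic PyTorch sketch of what a full resume usually saves and restores (the names are illustrative, not the actual arcface_torch code):
```python
import torch

def save_checkpoint(path, epoch, backbone, header, optimizer, scheduler):
    # Save everything needed to continue training, not just the backbone weights.
    torch.save({
        "epoch": epoch,
        "backbone": backbone.state_dict(),
        "header": header.state_dict(),        # margin / partial-FC head
        "optimizer": optimizer.state_dict(),
        "scheduler": scheduler.state_dict(),
    }, path)

def load_checkpoint(path, backbone, header, optimizer, scheduler):
    ckpt = torch.load(path, map_location="cpu")
    backbone.load_state_dict(ckpt["backbone"])
    header.load_state_dict(ckpt["header"])
    optimizer.load_state_dict(ckpt["optimizer"])
    scheduler.load_state_dict(ckpt["scheduler"])
    return ckpt["epoch"] + 1                   # epoch to resume from
```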
|
open
|
2021-10-22T09:26:53Z
|
2021-10-22T10:20:58Z
|
https://github.com/deepinsight/insightface/issues/1798
|
[] |
gradient1706
| 1
|
quokkaproject/quokka
|
flask
| 133
|
replace Misaka with Mistune
|
https://github.com/lepture/mistune
|
closed
|
2014-02-28T20:18:51Z
|
2015-07-16T02:56:34Z
|
https://github.com/quokkaproject/quokka/issues/133
|
[
"enhancement"
] |
rochacbruno
| 2
|
davidsandberg/facenet
|
computer-vision
| 1,213
|
Broken link
|
The link to the training dataset in the README is broken:
https://github.com/davidsandberg/facenet#training-data
|
open
|
2021-12-17T12:56:49Z
|
2021-12-17T12:56:49Z
|
https://github.com/davidsandberg/facenet/issues/1213
|
[] |
owos
| 0
|
xinntao/Real-ESRGAN
|
pytorch
| 726
|
Real-ESRGAN Not Using my GPU
|
Hello, I've been having this issue since I restored my computer. I'm not sure if something is missing, but when I start the process (to rescale an image), I get this error/message:
(base) D:\Ai\bycloud\Real-ESRGAN-master>python inference_realesrgan.py --model_path experiments/pretrained_models/RealESRGAN_x4plus.pth --input inputs --tile 64
C:\Users\User\anaconda3\Lib\site-packages\torchvision\transforms\functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
Testing 0 Pretty
|
open
|
2023-11-30T06:41:50Z
|
2023-11-30T06:41:50Z
|
https://github.com/xinntao/Real-ESRGAN/issues/726
|
[] |
St33pFx
| 0
|
apachecn/ailearning
|
python
| 212
|
AdaBoost multi-class classification question
|
Hi! Thanks for your code. How should AdaBoost be implemented for multi-class classification? For example, I have 20 classes.
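In case it helps, here is a minimal sketch using scikit-learn, whose AdaBoostClassifier handles multi-class problems through the SAMME algorithm; 20 classes work the same way as the 3 in this toy example:
```python
# Multi-class AdaBoost sketch with scikit-learn (SAMME handles K classes directly).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, algorithm="SAMME", random_state=0)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```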
|
closed
|
2017-11-21T05:37:35Z
|
2017-11-21T10:51:53Z
|
https://github.com/apachecn/ailearning/issues/212
|
[] |
lxtGH
| 1
|
Nemo2011/bilibili-api
|
api
| 323
|
[Bug] login exception
|

|
closed
|
2023-06-05T23:56:56Z
|
2023-06-08T01:52:40Z
|
https://github.com/Nemo2011/bilibili-api/issues/323
|
[
"bug"
] |
z0z0r4
| 4
|
deepfakes/faceswap
|
machine-learning
| 735
|
ERROR Caught exception in thread: 'training_0' // the original model type works, but any other model fails
|
wanqi@amax:~/Desktop/faceswap-master$ python faceswap.py train -A faces/source -B faces/lyf -m models/lyf-128/ -p -t realface
05/22/2019 09:57:53 INFO Log level set to: INFO
Using TensorFlow backend.
05/22/2019 09:57:57 INFO Model A Directory: /home/wanqi/Desktop/faceswap-master/faces/source
05/22/2019 09:57:57 INFO Model B Directory: /home/wanqi/Desktop/faceswap-master/faces/lyf
05/22/2019 09:57:57 INFO Training data directory: /home/wanqi/Desktop/faceswap-master/models/lyf-128
05/22/2019 09:57:57 INFO ===============================================
05/22/2019 09:57:57 INFO - Starting -
05/22/2019 09:57:57 INFO - Using live preview -
05/22/2019 09:57:57 INFO - Press 'ENTER' to save and quit -
05/22/2019 09:57:57 INFO - Press 'S' to save model weights immediately -
05/22/2019 09:57:57 INFO ===============================================
05/22/2019 09:57:58 INFO Loading data, this may take a while...
05/22/2019 09:57:58 INFO Loading Model from Realface plugin...
05/22/2019 09:58:04 WARNING No existing state file found. Generating.
05/22/2019 09:58:27 INFO Creating new 'realface' model in folder: '/home/wanqi/Desktop/faceswap-master/models/lyf-128'
05/22/2019 09:58:28 INFO Loading Trainer from Original plugin...
05/22/2019 09:58:29 CRITICAL Error caught! Exiting...
05/22/2019 09:58:29 ERROR Caught exception in thread: 'training_0'
You are using pip version 10.0.1, however version 19.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
05/22/2019 09:58:39 ERROR Got Exception on main handler:
Traceback (most recent call last):
File "/home/wanqi/Desktop/faceswap-master/lib/cli.py", line 110, in execute_script
process.process()
File "/home/wanqi/Desktop/faceswap-master/scripts/train.py", line 98, in process
self.end_thread(thread, err)
File "/home/wanqi/Desktop/faceswap-master/scripts/train.py", line 123, in end_thread
thread.join()
File "/home/wanqi/Desktop/faceswap-master/lib/multithreading.py", line 460, in join
raise thread.err[1].with_traceback(thread.err[2])
File "/home/wanqi/Desktop/faceswap-master/lib/multithreading.py", line 390, in run
self._target(*self._args, **self._kwargs)
File "/home/wanqi/Desktop/faceswap-master/scripts/train.py", line 149, in training
raise err
File "/home/wanqi/Desktop/faceswap-master/scripts/train.py", line 138, in training
trainer = self.load_trainer(model)
File "/home/wanqi/Desktop/faceswap-master/scripts/train.py", line 197, in load_trainer
self.args.batch_size)
File "/home/wanqi/Desktop/faceswap-master/plugins/train/trainer/_base.py", line 51, in __init__
self.process_training_opts()
File "/home/wanqi/Desktop/faceswap-master/plugins/train/trainer/_base.py", line 96, in process_training_opts
landmarks = Landmarks(self.model.training_opts).landmarks
File "/home/wanqi/Desktop/faceswap-master/plugins/train/trainer/_base.py", line 604, in __init__
self.landmarks = self.get_alignments()
File "/home/wanqi/Desktop/faceswap-master/plugins/train/trainer/_base.py", line 617, in get_alignments
serializer=serializer)
File "/home/wanqi/Desktop/faceswap-master/lib/alignments.py", line 36, in __init__
self.data = self.load()
File "/home/wanqi/Desktop/faceswap-master/lib/alignments.py", line 121, in load
"{}".format(self.file))
ValueError: Error: Alignments file not found at /home/wanqi/Desktop/faceswap-master/faces/source/alignments.json
05/22/2019 09:58:39 CRITICAL An unexpected crash has occurred. Crash report written to /home/wanqi/Desktop/faceswap-master/crash_report.2019.05.22.095829236316.log. Please verify you are running the latest version of faceswap before reporting
05/22/2019 09:58:28 MainProcess training_0 _base <lambda> VERBOSE leaky_re_lu_60[0][0]
05/22/2019 09:58:28 MainProcess training_0 _base <lambda> VERBOSE __________________________________________________________________________________________________
05/22/2019 09:58:28 MainProcess training_0 _base <lambda> VERBOSE leaky_re_lu_63 (LeakyReLU) (None, 8, 8, 512) 0 add_14[0][0]
05/22/2019 09:58:28 MainProcess training_0 _base <lambda> VERBOSE __________________________________________________________________________________________________
05/22/2019 09:58:28 MainProcess training_0 _base <lambda> VERBOSE conv2d_54 (Conv2D) (None, 4, 4, 1024) 13108224 leaky_re_lu_63[0][0]
05/22/2019 09:58:28 MainProcess training_0 _base <lambda> VERBOSE __________________________________________________________________________________________________
05/22/2019 09:58:28 MainProcess training_0 _base <lambda> VERBOSE leaky_re_lu_64 (LeakyReLU) (None, 4, 4, 1024) 0 conv2d_54[0][0]
05/22/2019 09:58:28 MainProcess training_0 _base <lambda> VERBOSE ==================================================================================================
05/22/2019 09:58:28 MainProcess training_0 _base <lambda> VERBOSE Total params: 29,604,608
05/22/2019 09:58:28 MainProcess training_0 _base <lambda> VERBOSE Trainable params: 29,604,608
05/22/2019 09:58:28 MainProcess training_0 _base <lambda> VERBOSE Non-trainable params: 0
05/22/2019 09:58:28 MainProcess training_0 _base <lambda> VERBOSE __________________________________________________________________________________________________
05/22/2019 09:58:28 MainProcess training_0 _base compile_predictors DEBUG Compiling Predictors
05/22/2019 09:58:28 MainProcess training_0 _base get_optimizer DEBUG Optimizer kwargs: {'lr': 5e-05, 'beta_1': 0.5, 'beta_2': 0.999}
05/22/2019 09:58:28 MainProcess training_0 _base loss_function VERBOSE Using DSSIM Loss
05/22/2019 09:58:28 MainProcess training_0 _base loss_function VERBOSE Penalizing mask for Loss
05/22/2019 09:58:28 MainProcess training_0 _base mask_loss_function VERBOSE Using Mean Squared Error Loss for mask
05/22/2019 09:58:28 MainProcess training_0 _base add_session_loss_names DEBUG Adding session loss_names. (side: 'a', loss_names: ['total_loss', 'loss', 'mask_loss']
05/22/2019 09:58:28 MainProcess training_0 _base add_session_loss_names DEBUG Adding session loss_names. (side: 'b', loss_names: ['total_loss', 'loss', 'mask_loss']
05/22/2019 09:58:28 MainProcess training_0 _base compile_predictors DEBUG Compiled Predictors. Losses: ['total_loss', 'loss', 'mask_loss']
05/22/2019 09:58:28 MainProcess training_0 _base set_training_data DEBUG Setting training data
05/22/2019 09:58:28 MainProcess training_0 _base calculate_coverage_ratio DEBUG Requested coverage_ratio: 0.625
05/22/2019 09:58:28 MainProcess training_0 _base calculate_coverage_ratio DEBUG Final coverage_ratio: 0.625
05/22/2019 09:58:28 MainProcess training_0 _base set_training_data DEBUG Set training data: {'alignments': {'a': '/home/wanqi/Desktop/faceswap-master/faces/source/alignments.json', 'b': '/home/wanqi/Desktop/faceswap-master/faces/lyf/alignments.json'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'no_flip': False, 'pingpong': False, 'training_size': 256, 'no_logs': False, 'mask_type': 'components', 'coverage_ratio': 0.625, 'preview_images': 14}
05/22/2019 09:58:28 MainProcess training_0 _base __init__ DEBUG Initialized ModelBase (Model)
05/22/2019 09:58:28 MainProcess training_0 realface __init__ DEBUG Initialized Model
05/22/2019 09:58:28 MainProcess training_0 train load_model DEBUG Loaded Model
05/22/2019 09:58:28 MainProcess training_0 train load_trainer DEBUG Loading Trainer
05/22/2019 09:58:28 MainProcess training_0 plugin_loader _import INFO Loading Trainer from Original plugin...
05/22/2019 09:58:28 MainProcess training_0 _base __init__ DEBUG Initializing TrainerBase: (model: '<plugins.train.model.realface.Model object at 0x7f233af0e0b8>', batch_size: 64)
05/22/2019 09:58:28 MainProcess training_0 _base add_session_batchsize DEBUG Adding session batchsize: 64
05/22/2019 09:58:28 MainProcess training_0 _base process_training_opts DEBUG {'alignments': {'a': '/home/wanqi/Desktop/faceswap-master/faces/source/alignments.json', 'b': '/home/wanqi/Desktop/faceswap-master/faces/lyf/alignments.json'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'no_flip': False, 'pingpong': False, 'training_size': 256, 'no_logs': False, 'mask_type': 'components', 'coverage_ratio': 0.625, 'preview_images': 14}
05/22/2019 09:58:28 MainProcess training_0 _base landmarks_required DEBUG True
05/22/2019 09:58:28 MainProcess training_0 _base __init__ DEBUG Initializing Landmarks: (training_opts: '{'alignments': {'a': '/home/wanqi/Desktop/faceswap-master/faces/source/alignments.json', 'b': '/home/wanqi/Desktop/faceswap-master/faces/lyf/alignments.json'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'no_flip': False, 'pingpong': False, 'training_size': 256, 'no_logs': False, 'mask_type': 'components', 'coverage_ratio': 0.625, 'preview_images': 14}')
05/22/2019 09:58:28 MainProcess training_0 alignments __init__ DEBUG Initializing Alignments: (folder: '/home/wanqi/Desktop/faceswap-master/faces/source', filename: 'alignments', serializer: 'json')
05/22/2019 09:58:28 MainProcess training_0 alignments get_serializer DEBUG Getting serializer: (filename: 'alignments', serializer: 'json')
05/22/2019 09:58:28 MainProcess training_0 alignments get_serializer DEBUG Serializer set from argument: 'json'
05/22/2019 09:58:28 MainProcess training_0 alignments get_serializer VERBOSE Using 'json' serializer for alignments
05/22/2019 09:58:28 MainProcess training_0 alignments get_location DEBUG Getting location: (folder: '/home/wanqi/Desktop/faceswap-master/faces/source', filename: 'alignments')
05/22/2019 09:58:28 MainProcess training_0 alignments get_location DEBUG File extension set from serializer: 'json'
05/22/2019 09:58:28 MainProcess training_0 alignments get_location VERBOSE Alignments filepath: '/home/wanqi/Desktop/faceswap-master/faces/source/alignments.json'
05/22/2019 09:58:28 MainProcess training_0 alignments load DEBUG Loading alignments
05/22/2019 09:58:28 MainProcess training_0 multithreading run DEBUG Error in thread (training_0): Error: Alignments file not found at /home/wanqi/Desktop/faceswap-master/faces/source/alignments.json
05/22/2019 09:58:29 MainProcess MainThread train monitor DEBUG Thread error detected
05/22/2019 09:58:29 MainProcess MainThread train monitor DEBUG Closed Monitor
05/22/2019 09:58:29 MainProcess MainThread train end_thread DEBUG Ending Training thread
05/22/2019 09:58:29 MainProcess MainThread train end_thread CRITICAL Error caught! Exiting...
05/22/2019 09:58:29 MainProcess MainThread multithreading join DEBUG Joining Threads: 'training'
05/22/2019 09:58:29 MainProcess MainThread multithreading join DEBUG Joining Thread: 'training_0'
05/22/2019 09:58:29 MainProcess MainThread multithreading join ERROR Caught exception in thread: 'training_0'
Traceback (most recent call last):
File "/home/wanqi/Desktop/faceswap-master/lib/cli.py", line 110, in execute_script
process.process()
File "/home/wanqi/Desktop/faceswap-master/scripts/train.py", line 98, in process
self.end_thread(thread, err)
File "/home/wanqi/Desktop/faceswap-master/scripts/train.py", line 123, in end_thread
thread.join()
File "/home/wanqi/Desktop/faceswap-master/lib/multithreading.py", line 460, in join
raise thread.err[1].with_traceback(thread.err[2])
File "/home/wanqi/Desktop/faceswap-master/lib/multithreading.py", line 390, in run
self._target(*self._args, **self._kwargs)
File "/home/wanqi/Desktop/faceswap-master/scripts/train.py", line 149, in training
raise err
File "/home/wanqi/Desktop/faceswap-master/scripts/train.py", line 138, in training
trainer = self.load_trainer(model)
File "/home/wanqi/Desktop/faceswap-master/scripts/train.py", line 197, in load_trainer
self.args.batch_size)
File "/home/wanqi/Desktop/faceswap-master/plugins/train/trainer/_base.py", line 51, in __init__
self.process_training_opts()
File "/home/wanqi/Desktop/faceswap-master/plugins/train/trainer/_base.py", line 96, in process_training_opts
landmarks = Landmarks(self.model.training_opts).landmarks
File "/home/wanqi/Desktop/faceswap-master/plugins/train/trainer/_base.py", line 604, in __init__
self.landmarks = self.get_alignments()
File "/home/wanqi/Desktop/faceswap-master/plugins/train/trainer/_base.py", line 617, in get_alignments
serializer=serializer)
File "/home/wanqi/Desktop/faceswap-master/lib/alignments.py", line 36, in __init__
self.data = self.load()
File "/home/wanqi/Desktop/faceswap-master/lib/alignments.py", line 121, in load
"{}".format(self.file))
ValueError: Error: Alignments file not found at /home/wanqi/Desktop/faceswap-master/faces/source/alignments.json
============ System Information ============
encoding: ANSI_X3.4-1968
git_branch: Not Found
git_commits: Not Found
gpu_cuda: 8.0
gpu_cudnn: Not Found. Check Conda packages for Conda cuDNN
gpu_devices: GPU_0: Tesla K80, GPU_1: Tesla K80, GPU_2: Tesla K80, GPU_3: Tesla K80, GPU_4: Tesla K80, GPU_5: Tesla K80, GPU_6: Tesla K80, GPU_7: Tesla K80, GPU_8: Tesla K80, GPU_9: Tesla K80
gpu_devices_active: GPU_0, GPU_1, GPU_2, GPU_3, GPU_4, GPU_5, GPU_6, GPU_7, GPU_8, GPU_9
gpu_driver: 375.66
gpu_vram: GPU_0: 11439MB, GPU_1: 11439MB, GPU_2: 11439MB, GPU_3: 11439MB, GPU_4: 11439MB, GPU_5: 11439MB, GPU_6: 11439MB, GPU_7: 11439MB, GPU_8: 11439MB, GPU_9: 11439MB
os_machine: x86_64
os_platform: Linux-3.19.0-80-generic-x86_64-with-debian-jessie-sid
os_release: 3.19.0-80-generic
py_command: faceswap.py train -A faces/source -B faces/lyf -m models/lyf-128/ -p -t realface
py_conda_version: conda 4.6.14
py_implementation: CPython
py_version: 3.6.8
py_virtual_env: True
sys_cores: 40
sys_processor: x86_64
sys_ram: Total: 257847MB, Available: 204245MB, Used: 41256MB, Free: 44618MB
=============== Pip Packages ===============
absl-py==0.6.1
alabaster==0.7.10
alfred==0.2
alfred-py==2.4.8
anaconda-client==1.6.14
anaconda-navigator==1.9.7
anaconda-project==0.8.2
ansimarkup==1.4.0
asn1crypto==0.24.0
astor==0.7.1
astroid==1.6.3
astropy==3.0.2
attrs==18.1.0
Babel==2.5.3
backcall==0.1.0
backports.shutil-get-terminal-size==1.0.0
backports.weakref==1.0rc1
beautifulsoup4==4.6.0
better-exceptions-fork==0.2.1.post6
bitarray==0.8.1
bkcharts==0.2
blaze==0.11.3
bleach==2.1.3
blessings==1.7
bokeh==0.12.16
boto==2.48.0
Bottleneck==1.2.1
certifi==2019.3.9
cffi==1.11.5
chardet==3.0.4
click==6.7
cloudpickle==0.5.3
clyent==1.2.2
colorama==0.3.9
conda==4.6.14
conda-build==3.10.5
conda-verify==2.0.0
contextlib2==0.5.5
cryptography==2.4.2
cycler==0.10.0
Cython==0.28.2
cytoolz==0.9.0.1
dask==0.17.5
datashape==0.5.4
decorator==4.3.0
Deprecated==1.2.5
distributed==1.21.8
dlib==19.17.0
docutils==0.14
entrypoints==0.2.3
et-xmlfile==1.0.1
face-recognition==1.2.3
face-recognition-models==0.3.0
fastcache==1.0.2
fastcluster==1.1.25
ffmpeg==1.4
ffmpy==0.2.2
filelock==3.0.4
Flask==1.0.2
Flask-Cors==3.0.4
future==0.17.1
gast==0.2.1
gevent==1.3.0
glob2==0.6
gmpy2==2.0.8
gpustat==0.5.0
greenlet==0.4.13
grpcio==1.16.1
h5py==2.9.0
heapdict==1.0.0
html5lib==1.0.1
idna==2.6
imageio==2.3.0
imageio-ffmpeg==0.3.0
imagesize==1.0.0
ipykernel==4.8.2
ipython==6.4.0
ipython-genutils==0.2.0
ipywidgets==7.2.1
isort==4.3.4
itsdangerous==0.24
jdcal==1.4
jedi==0.12.0
Jinja2==2.10
joblib==0.13.2
jsonschema==2.6.0
jupyter==1.0.0
jupyter-client==5.2.3
jupyter-console==5.2.0
jupyter-core==4.4.0
jupyterlab==0.32.1
jupyterlab-launcher==0.10.5
Keras==2.2.4
Keras-Applications==1.0.7
Keras-Preprocessing==1.0.9
kiwisolver==1.0.1
lazy-object-proxy==1.3.1
leveldb==0.20
llvmlite==0.23.1
lmdb==0.94
locket==0.2.0
loguru==0.2.5
lxml==4.2.1
Markdown==3.0.1
MarkupSafe==1.0
matplotlib==2.2.2
mccabe==0.6.1
mistune==0.8.3
mkl-fft==1.0.12
mkl-random==1.0.2
mock==2.0.0
more-itertools==4.1.0
mpmath==1.0.0
msgpack==0.6.1
msgpack-python==0.5.6
multipledispatch==0.5.0
navigator-updater==0.2.1
nbconvert==5.3.1
nbformat==4.4.0
networkx==2.1
nltk==3.3
nose==1.3.7
notebook==5.5.0
numba==0.38.0
numexpr==2.6.9
numpy==1.16.2
numpydoc==0.8.0
nvidia-ml-py3==7.352.0
odo==0.5.1
olefile==0.45.1
opencv-contrib-python==4.0.0.21
opencv-python==4.1.0.25
openpyxl==2.5.3
packaging==17.1
pandas==0.23.0
pandocfilters==1.4.2
parso==0.2.0
partd==0.3.8
path.py==11.0.1
pathlib==1.0.1
pathlib2==2.3.2
patsy==0.5.0
pbr==5.1.3
pep8==1.7.1
pexpect==4.5.0
pickleshare==0.7.4
Pillow==6.0.0
pkginfo==1.4.2
pluggy==0.6.0
ply==3.11
prometheus-client==0.4.2
prompt-toolkit==1.0.15
protobuf==3.6.1
psutil==5.6.2
ptyprocess==0.5.2
py==1.5.3
pycocotools==2.0
pycodestyle==2.4.0
pycosat==0.6.3
pycparser==2.18
pycrypto==2.6.1
pycurl==7.43.0.2
pyflakes==1.6.0
Pygments==2.2.0
pylint==1.8.4
pyodbc==4.0.23
pyOpenSSL==18.0.0
pyparsing==2.2.0
PySocks==1.6.8
pytest==3.5.1
pytest-arraydiff==0.2
pytest-astropy==0.3.0
pytest-doctestplus==0.1.3
pytest-openfiles==0.3.0
pytest-remotedata==0.2.1
python-dateutil==2.7.3
python-gflags==3.1.2
pytz==2018.4
PyWavelets==0.5.2
PyYAML==3.12
pyzmq==17.0.0
QtAwesome==0.4.4
qtconsole==4.3.1
QtPy==1.4.1
requests==2.18.4
rope==0.10.7
ruamel-yaml==0.15.35
scandir==1.5
scikit-image==0.15.0
scikit-learn==0.21.1
scipy==1.2.1
seaborn==0.8.1
Send2Trash==1.5.0
simplegeneric==0.8.1
singledispatch==3.4.0.3
six==1.11.0
snowballstemmer==1.2.1
sortedcollections==0.6.1
sortedcontainers==1.5.10
Sphinx==1.7.4
sphinxcontrib-websupport==1.0.1
spyder==3.2.8
SQLAlchemy==1.2.7
statsmodels==0.9.0
sympy==1.1.1
tables==3.4.3
tblib==1.3.2
tensorboard==1.12.2
tensorflow==1.12.0
tensorflow-estimator==1.13.0
tensorflow-tensorboard==1.5.1
termcolor==1.1.0
terminado==0.8.1
testpath==0.3.1
toolz==0.9.0
toposort==1.5
torch==1.0.1
tornado==5.0.2
tqdm==4.31.1
traitlets==4.3.2
typing==3.6.4
unicodecsv==0.14.1
urllib3==1.22
wcwidth==0.1.7
web.py==0.40.dev0
webencodings==0.5.1
Werkzeug==0.14.1
widgetsnbextension==3.2.1
wrapt==1.10.11
xlrd==1.1.0
XlsxWriter==1.0.4
xlwt==1.3.0
zict==0.1.3
============== Conda Packages ==============
# packages in environment at /home/wanqi/anaconda3/envs/chineseocr:
#
# Name Version Build Channel
_tflow_select 2.1.0 gpu
absl-py 0.7.1 py36_0
astor 0.7.1 py36_0
backcall 0.1.0 py36_0
backports 1.0 py36_1
backports.weakref 1.0rc1 py36_0
blas 1.0 mkl
bleach 1.5.0 py36_0
bzip2 1.0.6 h14c3975_5
c-ares 1.15.0 h7b6447c_1
ca-certificates 2019.1.23 0
cairo 1.14.12 h8948797_3
certifi 2019.3.9 py36_0
cffi 1.11.5 py36he75722e_1
cloudpickle 1.0.0 py_0
cudatoolkit 9.0 h13b8566_0
cudnn 7.3.1 cuda9.0_0
cupti 9.0.176 0
cycler 0.10.0 py36_0
cytoolz 0.9.0.1 py36h14c3975_1
dask-core 1.2.2 py_0
dbus 1.13.2 h714fa37_1
decorator 4.3.0 py36_0
entrypoints 0.2.3 py36_2
expat 2.2.6 he6710b0_0
ffmpeg 4.0 hcdf2ecd_0
fontconfig 2.13.0 h9420a91_0
freeglut 3.0.0 hf484d3e_5
freetype 2.9.1 h8a8886c_1
gast 0.2.2 py36_0
glib 2.56.2 hd408876_0
gmp 6.1.2 h6c8ec71_1
graphite2 1.3.13 h23475e2_0
grpcio 1.16.1 py36hf8bcb03_1
gst-plugins-base 1.14.0 hbbd80ab_1
gstreamer 1.14.0 hb453b48_1
h5py 2.9.0 py36h7918eee_0
harfbuzz 1.8.8 hffaf4a1_0
hdf5 1.10.4 hb1b8bf9_0
html5lib 0.9999999 py36_0
icu 58.2 h9c2bf20_1
imageio 2.5.0 py36_0
intel-openmp 2019.0 118
ipykernel 5.1.0 py36h39e3cac_0
ipython 7.1.1 py36h39e3cac_0
ipython_genutils 0.2.0 py36_0
ipywidgets 7.4.2 py36_0
jasper 2.0.14 h07fcdf6_1
jedi 0.13.1 py36_0
jinja2 2.10 py36_0
joblib 0.13.2 py36_0
jpeg 9b h024ee3a_2
jsonschema 2.6.0 py36_0
jupyter 1.0.0 py36_7
jupyter_client 5.2.3 py36_0
jupyter_console 6.0.0 py36_0
jupyter_core 4.4.0 py36_0
keras-applications 1.0.7 py_0
keras-base 2.2.4 py36_0
keras-gpu 2.2.4 0
keras-preprocessing 1.0.9 py_0
kiwisolver 1.1.0 py36he6710b0_0
libedit 3.1.20170329 h6b74fdf_2
libffi 3.2.1 hd88cf55_4
libgcc 7.2.0 h69d50b8_2
libgcc-ng 8.2.0 hdf63c60_1
libgfortran 3.0.0 1
libgfortran-ng 7.3.0 hdf63c60_0
libglu 9.0.0 hf484d3e_1
libopenblas 0.3.3 h5a2b251_3
libopus 1.3 h7b6447c_0
libpng 1.6.35 hbc83047_0
libprotobuf 3.7.1 hd408876_0
libsodium 1.0.16 h1bed415_0
libstdcxx-ng 8.2.0 hdf63c60_1
libtiff 4.0.10 h2733197_2
libuuid 1.0.3 h1bed415_2
libvpx 1.7.0 h439df22_0
libxcb 1.13 h1bed415_1
libxml2 2.9.8 h26e45fe_1
markdown 3.1 py36_0
markupsafe 1.0 py36h14c3975_1
matplotlib 2.2.2 py36hb69df0a_2
mistune 0.8.4 py36h7b6447c_0
mkl 2019.3 199
mkl_fft 1.0.12 py36ha843d7b_0
mkl_random 1.0.2 py36hd81dba3_0
mock 2.0.0 py36_0
nbconvert 5.3.1 py36_0
nbformat 4.4.0 py36_0
nccl 1.3.5 cuda9.0_0
ncurses 6.1 hf484d3e_0
networkx 2.3 py_0
ninja 1.9.0 py36hfd86e86_0
notebook 5.7.0 py36_0
numpy 1.16.3 py36h7e9f1db_0
numpy-base 1.16.3 py36hde5b4d6_0
olefile 0.46 py36_0
openblas 0.2.19 0
openssl 1.1.1b h7b6447c_1
pandoc 2.2.3.2 0
pandocfilters 1.4.2 py36_1
parso 0.3.1 py36_0
pathlib 1.0.1 py36_1
pbr 5.1.3 py_0
pcre 8.42 h439df22_0
pexpect 4.6.0 py36_0
pickleshare 0.7.5 py36_0
pillow 6.0.0 py36h34e0f95_0
pip 18.1 py36_0
pixman 0.38.0 h7b6447c_0
prometheus_client 0.4.2 py36_0
prompt_toolkit 2.0.7 py36_0
protobuf 3.7.1 py36he6710b0_0
psutil 5.6.2 py36h7b6447c_0
ptyprocess 0.6.0 py36_0
pycparser 2.19 py36_0
pygments 2.2.0 py36_0
pyparsing 2.4.0 py_0
pyqt 5.9.2 py36h05f1152_2
python 3.6.8 h0371630_0
python-dateutil 2.7.5 py36_0
pytorch 1.0.1 cuda90py36h8b0c50b_0
pytz 2019.1 py_0
pywavelets 1.0.3 py36hdd07704_1
pyyaml 5.1 py36h7b6447c_0
pyzmq 17.1.2 py36h14c3975_0
qt 5.9.7 h5867ecd_1
qtconsole 4.4.2 py36_0
readline 7.0 h7b6447c_5
scikit-image 0.15.0 py36he6710b0_0
scikit-learn 0.21.1 py36hd81dba3_0
scipy 1.2.1 py36h7c811a0_0
send2trash 1.5.0 py36_0
setuptools 40.5.0 py36_0
sip 4.19.8 py36hf484d3e_0
six 1.11.0 py36_1
sqlite 3.26.0 h7b6447c_0
tensorboard 1.12.2 py36he6710b0_0
tensorflow 1.12.0 gpu_py36he68c306_0
tensorflow-base 1.12.0 gpu_py36h8e0ae2d_0
tensorflow-estimator 1.13.0 py_0
tensorflow-gpu 1.12.0 h0d30ee6_0
tensorflow-gpu-base 1.7.0 py36hcdda91b_1
tensorflow-tensorboard 1.5.1 py36hf484d3e_1
termcolor 1.1.0 py36_1
terminado 0.8.1 py36_1
testpath 0.4.2 py36_0
tk 8.6.8 hbc83047_0
toolz 0.9.0 py36_0
tornado 5.1.1 py36h7b6447c_0
tqdm 4.31.1 py36_1
traitlets 4.3.2 py36_0
wcwidth 0.1.7 py36_0
webencodings 0.5.1 py36_1
werkzeug 0.15.2 py_0
wheel 0.32.2 py36_0
widgetsnbextension 3.4.2 py36_0
xz 5.2.4 h14c3975_4
yaml 0.1.7 had09818_2
zeromq 4.2.5 hf484d3e_1
zlib 1.2.11 ha838bed_2
zstd 1.3.7 h0b5b093_0
|
closed
|
2019-05-22T02:09:24Z
|
2019-06-01T13:50:36Z
|
https://github.com/deepfakes/faceswap/issues/735
|
[] |
Jinwanqi
| 3
|
autogluon/autogluon
|
computer-vision
| 4,852
|
[timeseries] Add sample_weight support to TimeSeriesPredictor
|
## Description
Add an option to customize the sample weight used by time series forecasting metrics (see the sketch below for a rough illustration of the idea).
There are two types of weighting that would be interesting to support:
- [ ] #4854
- [ ] #4855
## References
- [ ] Survey other forecasting libraries to find reasonable names for the API.
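
To make the request concrete, here is a minimal sketch of how an observation-level `sample_weight` could enter a point-forecast metric such as MAE. The `weighted_mae` helper, its signature, and the weight semantics are assumptions for illustration only; they are not part of AutoGluon's current `TimeSeriesPredictor` API.

```python
from typing import Optional

import numpy as np


def weighted_mae(y_true: np.ndarray,
                 y_pred: np.ndarray,
                 sample_weight: Optional[np.ndarray] = None) -> float:
    """Hypothetical sketch: MAE where each observation carries a weight.

    ``sample_weight`` is assumed to be non-negative and broadcastable to
    ``y_true``. With no weights this reduces to the ordinary MAE.
    """
    err = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    if sample_weight is None:
        return float(err.mean())
    w = np.asarray(sample_weight, dtype=float)
    return float((w * err).sum() / w.sum())


# Example: weight recent observations more heavily than older ones.
y_true = np.array([10.0, 12.0, 11.0, 13.0])
y_pred = np.array([11.0, 12.5, 10.0, 12.0])
weights = np.array([0.5, 0.75, 1.0, 1.25])
print(weighted_mae(y_true, y_pred, weights))  # weighted average of per-step errors
```

Item-level weighting would work analogously, with every time step of a given series sharing a single weight.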
|
open
|
2025-01-30T16:36:23Z
|
2025-02-20T14:09:18Z
|
https://github.com/autogluon/autogluon/issues/4852
|
[
"enhancement",
"module: timeseries"
] |
shchur
| 0
|
sqlalchemy/alembic
|
sqlalchemy
| 603
|
Unable to specify relative revision in command.downgrade API
|
I'd like to execute a single downgrade (to the previous revision) programmatically, so, mimicking the command-line functionality, I did:
```python
import os
from alembic import command
from alembic.config import Config

alembic_cfg = Config(os.path.join(APP_ROOT, 'alembic.ini'))
command.downgrade(alembic_cfg, '-1')
```
Unfortunately it is not going through as expected:
```python
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/alembic/script/base.py", line 143, in _catch_revision_errors
yield
File "/usr/local/lib/python3.6/site-packages/alembic/script/base.py", line 348, in _downgrade_revs
current_rev, destination, select_for_downgrade=True)
File "/usr/local/lib/python3.6/site-packages/alembic/script/revision.py", line 556, in iterate_revisions
inclusive, assert_relative_length
File "/usr/local/lib/python3.6/site-packages/alembic/script/revision.py", line 528, in _relative_iterate
"produce %d migrations" % (destination, abs(relative)))
alembic.script.revision.RevisionError: Relative revision -1 didn't produce 1 migrations
```
Am I doing anything wrong, or is it just not supported?
|
closed
|
2019-09-25T12:57:27Z
|
2021-04-25T15:59:19Z
|
https://github.com/sqlalchemy/alembic/issues/603
|
[
"question"
] |
babaMar
| 6
|