| repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (list, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
sczhou/CodeFormer
|
pytorch
| 174
|
Free
|
Monthly 100 free
|
closed
|
2023-03-11T06:30:35Z
|
2023-04-19T08:51:25Z
|
https://github.com/sczhou/CodeFormer/issues/174
|
[] |
jueahamed
| 2
|
d2l-ai/d2l-en
|
machine-learning
| 2,595
|
The content is outdated
|
I found the book to have very good content for the topics it covers, but it stops at GANs. Many not-so-new topics like YOLO and diffusion models are never discussed. I've seen open issues mentioning this from several years ago, but it seems no content has been added since. Will the book continue to be updated, or is it archived?
|
open
|
2024-03-31T03:33:11Z
|
2024-12-15T15:41:30Z
|
https://github.com/d2l-ai/d2l-en/issues/2595
|
[] |
hiepdang-ml
| 1
|
chmp/ipytest
|
pytest
| 45
|
database access with pytest-django
|
I'm running into some problems with Django database access. pytest-django does [some trickery](https://pytest-django.readthedocs.io/en/latest/database.html) with the database so that it can roll back any changes made by a test to the test database. The django_db mark informs pytest that a test will be accessing the test database.
```
%%run_pytest -s

import pytest
import ipytest
from django.contrib.contenttypes.models import ContentType


@pytest.mark.django_db
def test_django():
    for content_type in ContentType.objects.all():
        print(content_type)
    assert True
```
Here's the error I'm getting:
```
E
==================================== ERRORS ====================================
________________________ ERROR at setup of test_django _________________________
self = <django.db.backends.postgresql.base.DatabaseWrapper object at 0x7fb4c35ec730>
@contextmanager
def _nodb_cursor(self):
"""
Return a cursor from an alternative connection to be used when there is
no need to access the main database, specifically for test db
creation/deletion. This also prevents the production database from
being exposed to potential child threads while (or after) the test
database is destroyed. Refs #10868, #17786, #16969.
"""
conn = self.__class__({**self.settings_dict, 'NAME': None}, alias=NO_DB_ALIAS)
try:
> with conn.cursor() as cursor:
/opt/alex/pyvenv/lib/python3.8/site-packages/django/db/backends/base/base.py:620:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (<django.db.backends.postgresql.base.DatabaseWrapper object at 0x7fb4bf2930a0>,)
kwargs = {}
event_loop = <_UnixSelectorEventLoop running=True closed=False debug=False>
@functools.wraps(func)
def inner(*args, **kwargs):
if not os.environ.get('DJANGO_ALLOW_ASYNC_UNSAFE'):
# Detect a running event loop in this thread.
try:
event_loop = asyncio.get_event_loop()
except RuntimeError:
pass
else:
if event_loop.is_running():
> raise SynchronousOnlyOperation(message)
E django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async.
/opt/alex/pyvenv/lib/python3.8/site-packages/django/utils/asyncio.py:24: SynchronousOnlyOperation
During handling of the above exception, another exception occurred:
request = <SubRequest '_django_db_marker' for <Function test_django>>
@pytest.fixture(autouse=True)
def _django_db_marker(request):
"""Implement the django_db marker, internal to pytest-django.
This will dynamically request the ``db``, ``transactional_db`` or
``django_db_reset_sequences`` fixtures as required by the django_db marker.
"""
marker = request.node.get_closest_marker("django_db")
if marker:
transaction, reset_sequences = validate_django_db(marker)
if reset_sequences:
request.getfixturevalue("django_db_reset_sequences")
elif transaction:
request.getfixturevalue("transactional_db")
else:
> request.getfixturevalue("db")
/opt/alex/pyvenv/lib/python3.8/site-packages/pytest_django/plugin.py:513:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/alex/pyvenv/lib/python3.8/site-packages/pytest_django/fixtures.py:105: in django_db_setup
db_cfg = setup_databases(
/opt/alex/pyvenv/lib/python3.8/site-packages/django/test/utils.py:170: in setup_databases
connection.creation.create_test_db(
/opt/alex/pyvenv/lib/python3.8/site-packages/django/db/backends/base/creation.py:55: in create_test_db
self._create_test_db(verbosity, autoclobber, keepdb)
/opt/alex/pyvenv/lib/python3.8/site-packages/django/db/backends/base/creation.py:176: in _create_test_db
with self._nodb_cursor() as cursor:
/opt/alex/pyvenv/build/Python-3.8.3/lib/python3.8/contextlib.py:113: in __enter__
return next(self.gen)
/opt/alex/pyvenv/lib/python3.8/site-packages/django/db/backends/postgresql/base.py:298: in _nodb_cursor
with super()._nodb_cursor() as cursor:
/opt/alex/pyvenv/build/Python-3.8.3/lib/python3.8/contextlib.py:113: in __enter__
return next(self.gen)
/opt/alex/pyvenv/lib/python3.8/site-packages/django/db/backends/base/base.py:623: in _nodb_cursor
conn.close()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (<django.db.backends.postgresql.base.DatabaseWrapper object at 0x7fb4bf2930a0>,)
kwargs = {}
event_loop = <_UnixSelectorEventLoop running=True closed=False debug=False>
@functools.wraps(func)
def inner(*args, **kwargs):
if not os.environ.get('DJANGO_ALLOW_ASYNC_UNSAFE'):
# Detect a running event loop in this thread.
try:
event_loop = asyncio.get_event_loop()
except RuntimeError:
pass
else:
if event_loop.is_running():
> raise SynchronousOnlyOperation(message)
E django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async.
/opt/alex/pyvenv/lib/python3.8/site-packages/django/utils/asyncio.py:24: SynchronousOnlyOperation
=========================== short test summary info ============================
ERROR tmp55mtb4ih.py::test_django - django.core.exceptions.SynchronousOnlyOpe...
1 error in 0.21s
```
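A possible workaround, not part of the original report: Jupyter runs an asyncio event loop, so Django's async-safety guard raises `SynchronousOnlyOperation` during test-database setup. Django documents an escape hatch for exactly this notebook situation; a minimal sketch:
```python
# Hedged workaround sketch (an assumption, not from the thread): allow Django's
# synchronous ORM calls even though the notebook's event loop is running.
import os

os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true"  # set before the test runs
```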
|
closed
|
2020-07-23T07:45:52Z
|
2020-07-24T06:24:01Z
|
https://github.com/chmp/ipytest/issues/45
|
[] |
highpost
| 3
|
falconry/falcon
|
api
| 1,435
|
Improve falcon-print-routes tool
|
`falcon-print-routes` is a simple (albeit somewhat spartan) tool that may come in handy for listing API routes.
We could polish it and expose it more widely:
* Advertise it in the documentation (currently it is only mentioned in the `1.1.0` changelog)
* Make it aware of route suffixes, a new feature in 2.0
* Expose interface to be able to list routes programmatically (suggested by @CaselIT )
* Make it clear that the tool only works with the standard `CompiledRouter` (suggested by @CaselIT )
* Anything else? (suggestions welcome!)
|
closed
|
2019-02-09T22:39:10Z
|
2020-03-30T21:08:08Z
|
https://github.com/falconry/falcon/issues/1435
|
[
"good first issue",
"enhancement",
"needs contributor"
] |
vytas7
| 11
|
jupyter-book/jupyter-book
|
jupyter
| 1,686
|
Image ratios not maintained on small screens
|
### Describe the bug
**context**
When I view a page with an image on mobile (and the image has a `height` property set).
**expectation**
I expect the image to be scaled in *both* dimensions such that the image ratio is maintained.
**bug**
But instead, the height value is maintained, distorting the image ratio.
See screenshots of the issue here: https://github.com/alan-turing-institute/the-turing-way/issues/2310#issue-1182200758
I tried the solution of removing all of our height tags, but that makes some images very large on desktop (see final screenshot here: https://github.com/alan-turing-institute/the-turing-way/pull/2311#issuecomment-1083475868).
Ideally, I would like a 'max height' tag, or for the default behavior to be to maintain aspect ratios. I'm very open to alternative suggestions. The ~~boring~~ hacky solution is to only use height tags for portrait-orientation images and accept that they might get slightly distorted on mobile. Or I imagine we could use the `scale` attribute instead of the `height` one, and that might work. **But** it would be preferable to be able to set all images to a standard configuration, rather than have to either set height depending on orientation or calculate scale manually per image.
### Reproduce the bug
See above screenshots
### List your environment
_No response_
|
open
|
2022-03-30T18:51:24Z
|
2022-09-16T22:52:06Z
|
https://github.com/jupyter-book/jupyter-book/issues/1686
|
[
"bug",
":label: sphinx-book-theme"
] |
da5nsy
| 4
|
pallets/flask
|
flask
| 4,914
|
Logging bug when two Flask apps are initialized
|
I encountered a strange log related bug in the unit tests of a project I am working on. Here is the most minimalist snippet I could write to reproduce the bug.
The `run` method configures logs to be saved in a file, initializes an empty Flask application, and produces one log entry. The snippet runs the method twice, with two different log paths. The first time the method is run, a log entry is actually written to the expected log file, but not the second time.
```python
import os
import logging.config

from flask import Flask


def run(log_path):
    logging.config.dictConfig(
        {
            "version": 1,
            "formatters": {
                "default": {
                    "format": "%(message)s",
                }
            },
            "handlers": {
                "default": {
                    "class": "logging.FileHandler",
                    "filename": log_path,
                    "formatter": "default",
                },
            },
            "root": {"level": "DEBUG", "handlers": ["default"]},
        }
    )

    app = Flask(__name__)
    app.logger.info("FOOBAR")

    with open(log_path) as fd:
        assert fd.read(), "log file is empty"
    os.remove(log_path)


run("/tmp/firstlog.txt")
run("/tmp/secondlog.txt")
```
```python
Traceback (most recent call last):
File "/path/to/flask_logging.py", line 37, in <module>
run("/tmp/secondlog.txt")
File "/path/to/flask_logging.py", line 31, in run
assert fd.read(), "log file is empty"
AssertionError: log file is empty
```
This is strange because the very same piece of code is executed twice and behaves differently; it looks like the first run has side effects on the second run.
The logging documentation indicates that `If app.logger is accessed before logging is configured, it will add a default handler.` This sounds like it might be related, but in my case there are two completely different `Flask` objects.
I could solve this in a few different ways, though none feels quite right:
- Replacing `app.logger.info` with `logging.info`
- Giving a different `import_name` to the `Flask` objects on the two runs.
- Setting [disable_existing_loggers](https://docs.python.org/3/library/logging.config.html) to `False` in `dictConfig`.
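A minimal sketch of the third option above, assuming the root cause is that `dictConfig` (which defaults to `disable_existing_loggers=True`) disables the logger the first `Flask(__name__)` call already created; only the config dict inside `run()` changes:
```python
    logging.config.dictConfig(
        {
            "version": 1,
            # keep loggers created by earlier runs instead of silently disabling them
            "disable_existing_loggers": False,
            "formatters": {"default": {"format": "%(message)s"}},
            "handlers": {
                "default": {
                    "class": "logging.FileHandler",
                    "filename": log_path,
                    "formatter": "default",
                },
            },
            "root": {"level": "DEBUG", "handlers": ["default"]},
        }
    )
```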
Environment:
- Python version: 3.10
- Flask version: 2.2.2
|
closed
|
2022-12-21T17:49:49Z
|
2023-01-10T00:05:50Z
|
https://github.com/pallets/flask/issues/4914
|
[] |
azmeuk
| 1
|
pandas-dev/pandas
|
python
| 61,055
|
BUG: invalid result of reindex on columns after unstack with Period data #60980
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd

series1 = pd.DataFrame(
    [(0, "s2", pd.Period(2022)), (0, "s1", pd.Period(2021))],
    columns=["A", "B", "C"],
).set_index(["A", "B"])["C"]
series2 = series1.astype(str)

print(series1.unstack("B").reindex(["s2"], axis=1))
print(series2.unstack("B").reindex(["s2"], axis=1))
```
### Issue Description
The example code prints
B s2
A
0 2021
B s2
A
0 2022
### Expected Behavior
Expect the result for both pd.Period and str data to be 2022:
B s2
A
0 2022
B s2
A
0 2022
(actually observed with older Pandas 2.0.3)
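A possible workaround suggested by the repro above (an assumption on my part, not something stated in the report): perform the reshape on the string representation and map back to `Period` afterwards.
```python
# workaround sketch: reshape on strings, then convert back to Period
result = series1.astype(str).unstack("B").reindex(["s2"], axis=1)
result = result.apply(lambda col: col.map(pd.Period, na_action="ignore"))
print(result)  # expected 2022 for (A=0, B="s2")
```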
### Installed Versions
<details>
INSTALLED VERSIONS
commit : https://github.com/pandas-dev/pandas/commit/0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.10
python-bits : 64
OS : Linux
OS-release : 6.2.16
Version : #1-NixOS SMP PREEMPT_DYNAMIC Tue Jan 1 00:00:00 UTC 1980
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.3
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
|
closed
|
2025-03-04T18:11:53Z
|
2025-03-05T00:13:34Z
|
https://github.com/pandas-dev/pandas/issues/61055
|
[
"Bug",
"Reshaping"
] |
Pranav970
| 3
|
flairNLP/flair
|
pytorch
| 2,661
|
URL endpoints within flair to fetch model/files not working
|
**Describe the bug**
Failing to execute `TARSClassifier.load(model="tars-base")` when running for the first time, i.e. when the `tars-base-v8.pt` file isn't already downloaded and cached locally.
When trying to load the model for the first time, I get the following error:
```
raise IOError(
OSError: HEAD request failed for url https://nlp.informatik.hu-berlin.de/resources/models/tars-base/tars-base-v8.pt with status code 503.
```
**To Reproduce**
Clear the `~/.flair/model/tars-base-v8.pt`, and try executing `TARSClassifier.load(model="tars-base")`
**Expected behavior**
The file should be downloaded and the model loaded successfully.
**Environment (please complete the following information):**
- OS : Linux and OSX
- Version : flair >= 0.10
|
closed
|
2022-03-08T09:25:33Z
|
2024-04-02T11:04:33Z
|
https://github.com/flairNLP/flair/issues/2661
|
[
"bug"
] |
thekeshavgoel
| 4
|
Asabeneh/30-Days-Of-Python
|
python
| 554
|
Indentation issue in Day 9
|
The pseudo-code example for Nested Conditions in Day 9 has an indentation issue. Although this can't be compiled as it is not proper Python, it should still adhere to correct indentation. So, instead of
```
# syntax
if condition:
    code
    if condition:
    code
```
it should read
```
# syntax
if condition:
    code
    if condition:
        code
```
|
open
|
2024-07-11T17:55:31Z
|
2024-07-11T17:55:31Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/554
|
[] |
jasperhabicht
| 0
|
nolar/kopf
|
asyncio
| 1,016
|
No RBAC permission to create events
|
### Keywords
documentation rbac events
### Problem
I followed the documentation around RBAC to create a Role and ClusterRole for an Operator I have been working on. Upon deployment, the Operator logged an error about not having permission to create events in the namespace where it was reconciling a CRD.
The documentation mentions events permissions as part of the Role, which is namespaced: https://github.com/nolar/kopf/blob/main/docs/deployment-rbac.yaml#L49-L52
Should the permissions for events not be moved to the ClusterRole, allowing the operator to publish events in all namespaces, not just its own? If yes, then I am more than happy to make a PR for this.
|
open
|
2023-03-20T15:42:30Z
|
2023-06-20T17:51:57Z
|
https://github.com/nolar/kopf/issues/1016
|
[
"question"
] |
Learloj
| 2
|
lorien/grab
|
web-scraping
| 138
|
Looks like mp_mode does not work under Python 2.7
|
```
and (self.network_result_queue.qsize()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/queues.py", line 143, in qsize
return self._maxsize - self._sem._semlock._get_value()
NotImplementedError
```
|
closed
|
2015-08-11T04:03:00Z
|
2015-08-31T13:39:19Z
|
https://github.com/lorien/grab/issues/138
|
[] |
oiwn
| 1
|
jonaswinkler/paperless-ng
|
django
| 186
|
Filters for "None" document type and correspondent
|
Can we add a toggle to the main Documents filter bar to filter for documents with no correspondent or no document type? I'd like to create a view for documents with no Document Type and/or no Correspondent, but the only options in the Document Type and Correspondent dropdowns are the ones you add yourself. There is no "None" option in either dropdown, which prevents creating views for this purpose.
|
closed
|
2020-12-24T16:21:35Z
|
2020-12-31T01:33:37Z
|
https://github.com/jonaswinkler/paperless-ng/issues/186
|
[
"feature request",
"fixed in next release"
] |
CallMeTerdFerguson
| 9
|
sebp/scikit-survival
|
scikit-learn
| 68
|
Explain use of intercept in ComponentwiseGradientBoostingSurvivalAnalysis
|
Hey guys,
I have an issue with predict() in GradientBoostingSurvivalAnalysis.
As an example:
model = GradientBoostingSurvivalAnalysis(n_estimators=1000, random_state=0)
model.fit(x_train, y_train)  # x_train (98, 1400), y_train (98, 2)
model.predict(x_test)  # x_test (45, 1400)
The output of predict() is a 2D array of shape (45, 32) instead of (45,)!
I have no idea why the output has the shape (45, 32)!
|
closed
|
2019-06-17T12:19:21Z
|
2019-07-19T06:34:28Z
|
https://github.com/sebp/scikit-survival/issues/68
|
[] |
Marjaneh-T
| 4
|
dmlc/gluon-cv
|
computer-vision
| 1,131
|
#deleted
|
#deleted
|
closed
|
2020-01-03T19:32:02Z
|
2020-01-06T21:47:45Z
|
https://github.com/dmlc/gluon-cv/issues/1131
|
[] |
djaym7
| 0
|
brightmart/text_classification
|
nlp
| 145
|
how to convert the textCNN model to ONNX format?
|
open
|
2021-07-29T02:34:59Z
|
2021-07-29T02:34:59Z
|
https://github.com/brightmart/text_classification/issues/145
|
[] |
SeekPoint
| 0
|
|
identixone/fastapi_contrib
|
pydantic
| 182
|
Project Status
|
Hello, is this package still maintained?
|
open
|
2022-04-02T22:40:39Z
|
2022-04-02T22:40:39Z
|
https://github.com/identixone/fastapi_contrib/issues/182
|
[] |
pplanel
| 0
|
desec-io/desec-stack
|
rest-api
| 848
|
DKIM Key explodes page layout
|
Hello there,
I have imported some BIND zone files, and basically it works as it should.
One entry in the zone file had a TXT record containing a DKIM value. Because that value is so long, the Subdomain column only shows the first letter of the DNS entry.

Is it possible to shorten the line in the Content column or make a line break?
Cheers,
Daniel
|
closed
|
2023-12-04T13:44:04Z
|
2023-12-20T14:09:20Z
|
https://github.com/desec-io/desec-stack/issues/848
|
[
"bug",
"gui"
] |
dwydler
| 8
|
huggingface/datasets
|
nlp
| 6,670
|
ValueError
|
### Describe the bug
ValueError Traceback (most recent call last)
[<ipython-input-11-9b99bc80ec23>](https://localhost:8080/#) in <cell line: 11>()
9 import numpy as np
10 import matplotlib.pyplot as plt
---> 11 from datasets import DatasetDict, Dataset
12 from transformers import AutoTokenizer, AutoModelForSequenceClassification
13 from transformers import Trainer, TrainingArguments
5 frames
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
16 __version__ = "2.17.0"
17
---> 18 from .arrow_dataset import Dataset
19 from .arrow_reader import ReadInstruction
20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
65
66 from . import config
---> 67 from .arrow_reader import ArrowReader
68 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
69 from .data_files import sanitize_patterns
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module>
27
28 import pyarrow as pa
---> 29 import pyarrow.parquet as pq
30 from tqdm.contrib.concurrent import thread_map
31
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module>
18 # flake8: noqa
19
---> 20 from .core import *
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module>
34 import pyarrow as pa
35 import pyarrow.lib as lib
---> 36 import pyarrow._parquet as _parquet
37
38 from pyarrow._parquet import (ParquetReader, Statistics, # noqa
/usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet()
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Expected behavior
Resolve the binary incompatibility
### Environment info
Google Colab Note book
|
closed
|
2024-02-16T11:05:17Z
|
2024-02-17T04:26:34Z
|
https://github.com/huggingface/datasets/issues/6670
|
[] |
prashanth19bolukonda
| 2
|
tensorflow/tensor2tensor
|
deep-learning
| 1,548
|
Training Transformer with Tensor2Tensor Using Own Data
|
### Description
I am trying to train a Transformer network on my own data using Tensor2Tensor. I am adapting the [Cloud Poetry](https://cloud.google.com/blog/products/gcp/cloud-poetry-training-and-hyperparameter-tuning-custom-text-models-on-cloud-ml-engine) example to fit my own task, `kt_problem`, where I am mapping sequences of floats to sequences of floats instead of sentences to sentences.
...
### Environment information
OS: Ubuntu 18.04.2 LTS
$ pip freeze | grep tensor
```
mesh-tensorflow==0.0.5
-e git+https://github.com/tensorflow/tensor2tensor.git@abf268a1d353d75d257e14e1a73dcea112337559#egg=tensor2tensor
tensorboard==1.13.1
tensorflow==1.13.1
tensorflow-datasets==1.0.1
tensorflow-estimator==1.13.0
tensorflow-metadata==0.13.0
tensorflow-probability==0.6.0
```
$ python -V
```
Python 3.7.3
```
### For bugs: reproduction and error logs
I have adapted the `generate_data()` and `generate_samples()` functions according to the scattered specifications for using one's own data with tensor2tensor (e.g. the data generation [README](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/README.md), line [174 of the `Problem` class](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/problem.py), etc.). They are as follows:
```
def generate_samples(self, data_dir, tmp_dir, train):
    import numpy as np
    features = pd.read_csv("data/kt/features.csv", dtype=np.float64)
    targets = pd.read_csv("data/kt/targets.csv", dtype=np.float64)
    for i in range(len(features)-1):
        yield {
            "inputs": list(features.iloc[i]),
            "targets": list(targets.iloc[i])
        }

def generate_data(self, data_dir, tmp_dir, task_id=-1):
    generator_utils.generate_dataset_and_shuffle(
        self.generate_samples(data_dir, tmp_dir, 1),
        self.training_filepaths(data_dir, 4, False),
        self.generate_samples(data_dir, tmp_dir, 0),
        self.dev_filepaths(data_dir, 3, False))
```
These are defined within my class KTProblem.
After making this change, I can successfully run
```
PROBLEM='kt_problem' #my own problem, for which I've defined a class
%%bash
DATA_DIR=./t2t_data
TMP_DIR=$DATA_DIR/tmp
t2t-datagen \
--t2t_usr_dir=./kt/trainer \
--problem=$PROBLEM \
--data_dir=$DATA_DIR \
--tmp_dir=$TMP_DIR
```
and it generates a bunch of train and dev files. But when I try to train a transformer on it with this code,
```
%%bash
DATA_DIR=./t2t_data
OUTDIR=./trained_model
t2t-trainer \
--data_dir=$DATA_DIR \
--t2t_usr_dir=./kt/trainer \
--problem=$PROBLEM \
--model=transformer \
--hparams_set=transformer_kt \
--output_dir=$OUTDIR --job-dir=$OUTDIR --train_steps=10
```
it throws the following error:
```
```
As you can see in `generate_samples()`, the data generated is `np.float64`, so I'm sure my inputs shouldn't be `int32`. The stack trace (posted right below) is super long, and I've been going through every line listed and checking the type of the inputs to see where this `int32` input came into the picture, but I can't find it. I want to know (1) why/how/where my inputs, which are floats, are becoming integers, but mostly (2) in general, how does one debug code like this? My approach thus far has been putting print statements right before every line in the stack trace, but that seems like a naive way to debug. Would it be better to use VS Code, or what is the lesson I need to learn here when a library (`tensor2tensor`, in this case) is not behaving as I think it ought to, and I don't want to have to know intimately what every function in the stack trace is doing?
Stack Trace:
```
INFO:tensorflow:Importing user module trainer from path /home/crytting/kt/kt
WARNING:tensorflow:From /home/crytting/kt/tensor2tensor/tensor2tensor/utils/trainer_lib.py:240: RunConfig.__init__ (from tensorflow.contrib.learn.python.learn.estimators.run_config) is deprecated and will be removed in a future version.
Instructions for updating:
When switching to tf.estimator.Estimator, use tf.estimator.RunConfig instead.
INFO:tensorflow:Configuring DataParallelism to replicate the model.
INFO:tensorflow:schedule=continuous_train_and_eval
INFO:tensorflow:worker_gpu=1
INFO:tensorflow:sync=False
WARNING:tensorflow:Schedule=continuous_train_and_eval. Assuming that training is running on a single machine.
INFO:tensorflow:datashard_devices: ['gpu:0']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:ps_devices: ['gpu:0']
INFO:tensorflow:Using config: {'_task_type': None, '_task_id': 0, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f04151caba8>, '_master': '', '_num_ps_replicas': 0, '_num_worker_replicas': 0, '_environment': 'local', '_is_chief': True, '_evaluation_master': '', '_train_distribute': None, '_eval_distribute': None, '_device_fn': None, '_tf_config': gpu_options {
per_process_gpu_memory_fraction: 1.0
}
, '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_secs': None, '_log_step_count_steps': 100, '_protocol': None, '_session_config': gpu_options {
per_process_gpu_memory_fraction: 0.95
}
allow_soft_placement: true
graph_options {
optimizer_options {
global_jit_level: OFF
}
}
isolate_session_state: true
, '_save_checkpoints_steps': 1000, '_keep_checkpoint_max': 20, '_keep_checkpoint_every_n_hours': 10000, '_model_dir': './trained_model', 'use_tpu': False, 't2t_device_info': {'num_async_replicas': 1}, 'data_parallelism': <tensor2tensor.utils.expert_utils.Parallelism object at 0x7f0464512dd8>}
WARNING:tensorflow:Estimator's model_fn (<function T2TModel.make_estimator_model_fn.<locals>.wrapping_model_fn at 0x7f0414891e18>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:ValidationMonitor only works with --schedule=train_and_evaluate
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 1000 or save_checkpoints_secs None.
WARNING:tensorflow:From /home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
INFO:tensorflow:Reading data files from ./t2t_data/kt_problem-train*
INFO:tensorflow:partition: 0 num_data_files: 4
WARNING:tensorflow:From /home/crytting/kt/tensor2tensor/tensor2tensor/utils/data_reader.py:275: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
`tf.data.TFRecordDataset(path)`
WARNING:tensorflow:From /home/crytting/kt/tensor2tensor/tensor2tensor/utils/data_reader.py:37: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:Shapes are not fully defined. Assuming batch_size means tokens.
WARNING:tensorflow:From /home/crytting/kt/tensor2tensor/tensor2tensor/utils/data_reader.py:233: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Setting T2TModel mode to 'train'
INFO:tensorflow:Using variable initializer: uniform_unit_scaling
INFO:tensorflow:Building model body
WARNING:tensorflow:From /home/crytting/kt/tensor2tensor/tensor2tensor/models/transformer.py:156: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
Traceback (most recent call last):
File "/home/crytting/anaconda3/envs/kt/bin/t2t-trainer", line 33, in <module>
tf.app.run()
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/home/crytting/anaconda3/envs/kt/bin/t2t-trainer", line 28, in main
t2t_trainer.main(argv)
File "/home/crytting/kt/tensor2tensor/tensor2tensor/bin/t2t_trainer.py", line 400, in main
execute_schedule(exp)
File "/home/crytting/kt/tensor2tensor/tensor2tensor/bin/t2t_trainer.py", line 356, in execute_schedule
getattr(exp, FLAGS.schedule)()
File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/trainer_lib.py", line 400, in continuous_train_and_eval
self._eval_spec)
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/training.py", line 471, in train_and_evaluate
return executor.run()
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/training.py", line 611, in run
return self.run_local()
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/training.py", line 712, in run_local
saving_listeners=saving_listeners)
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 358, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1124, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1155, in _train_model_default
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1112, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 1414, in wrapping_model_fn
use_tpu=use_tpu)
File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 1477, in estimator_model_fn
logits, losses_dict = model(features) # pylint: disable=not-callable
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/layers/base.py", line 530, in __call__
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 554, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 323, in call
sharded_logits, losses = self.model_fn_sharded(sharded_features)
File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 400, in model_fn_sharded
sharded_logits, sharded_losses = dp(self.model_fn, datashard_to_features)
File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/expert_utils.py", line 231, in __call__
outputs.append(fns[i](*my_args[i], **my_kwargs[i]))
File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 428, in model_fn
body_out = self.body(transformed_features)
File "/home/crytting/kt/tensor2tensor/tensor2tensor/models/transformer.py", line 280, in body
**decode_kwargs
File "/home/crytting/kt/tensor2tensor/tensor2tensor/models/transformer.py", line 217, in decode
**kwargs)
File "/home/crytting/kt/tensor2tensor/tensor2tensor/models/transformer.py", line 156, in transformer_decode
1.0 - hparams.layer_prepostprocess_dropout)
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 2979, in dropout
return dropout_v2(x, rate, noise_shape=noise_shape, seed=seed, name=name)
File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 3021, in dropout_v2
" be scaled. Got a %s tensor instead." % x.dtype)
ValueError: x has to be a floating point tensor since it's going to be scaled. Got a <dtype: 'int32'> tensor instead.
```
|
open
|
2019-04-18T00:45:34Z
|
2020-02-12T04:04:36Z
|
https://github.com/tensorflow/tensor2tensor/issues/1548
|
[] |
chrisrytting
| 3
|
aio-libs/aiomysql
|
sqlalchemy
| 46
|
'trans.rollback()' is not working correct
|
I've tried to use the standard code from the docs:
``` python
with (yield from engine) as conn:
    trans = yield from conn.begin()
    try:
        yield from conn.execute("insert into x (a, b) values (1, 2)")
    except Exception:
        yield from trans.rollback()
    else:
        yield from trans.commit()
```
but if `trans.rollback()` was called for some reason, then when closing the connection we receive an exception:
```
File "/home/legat/Projects/ecoisme/env/lib/python3.5/site-packages/aiomysql/sa/engine.py", line 191, in __exit__
self._engine.release(self._conn)
File "/home/legat/Projects/ecoisme/env/lib/python3.5/site-packages/aiomysql/sa/engine.py", line 116, in release
raise InvalidRequestError("Cannot release a connection with "
aiomysql.sa.exc.InvalidRequestError: Cannot release a connection with not finished transaction
```
I think the `_rollback_impl()` method of the SAConnection instance should be updated in the same way as it was in the aiopg library [at this commit](https://github.com/aio-libs/aiopg/commit/1c208f4efbce81890334652f0bb949382340a02f).
A little later I will make the same change, test it, and provide a pull request.
|
closed
|
2015-11-26T22:55:24Z
|
2015-12-05T15:59:46Z
|
https://github.com/aio-libs/aiomysql/issues/46
|
[] |
ikhlestov
| 2
|
holoviz/panel
|
jupyter
| 7,449
|
Tabulator: incorrect data aggregation
|
#### ALL software version info
<details>
<summary>Software Version Info</summary>
```plaintext
panel 1.5.3
```
</details>
#### Description of expected behavior and the observed behavior
Observed behavior: Flat aggregators on Tabulator always give sums for all columns. Nested aggregators always return the first element in the group. Indexes displayed as NaNs. Data grouping works properly though.
Expected behaviors:
- aggregating correctly based on the specified aggregation methods
- indexes displayed appropriately.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import pandas as pd
import panel as pn
df = pd.DataFrame([
    ('Germany', 2020, 9, 2.4, 'A'),
    ('Germany', 2021, 3, 7.3, 'C'),
    ('Germany', 2022, 6, 3.1, 'B'),
    ('UK', 2020, 5, 8.0, 'A'),
    ('UK', 2021, 1, 3.9, 'B'),
    ('UK', 2022, 9, 2.2, 'A')
], columns=['Country', 'Year', 'Int', 'Float', 'Str']).set_index(['Country', 'Year'])

nested_aggregators = {'Year': {'Int': 'sum', 'Float': 'mean'}}
flat_aggregators = {'Int': 'sum', 'Float': 'mean'}

flat_aggs_tabulator = pn.widgets.Tabulator(
    value=df, hierarchical=True, aggregators=flat_aggregators
)
nested_aggs_tabulator = pn.widgets.Tabulator(
    value=df, hierarchical=True, aggregators=nested_aggregators
)

pn.Accordion(
    ("Flat Aggs", flat_aggs_tabulator),
    ("Nested Aggs", nested_aggs_tabulator)
).servable()
```
#### Screenshots or screencasts of the bug in action

|
open
|
2024-10-29T06:55:43Z
|
2025-01-20T19:18:33Z
|
https://github.com/holoviz/panel/issues/7449
|
[
"component: tabulator"
] |
thuydotm
| 0
|
babysor/MockingBird
|
pytorch
| 301
|
Has anyone run this on a MacBook? 0.0
|
<img width="1667" alt="截屏2021-12-29 下午11 19 16" src="https://user-images.githubusercontent.com/20741235/147676931-143df97f-223d-446f-8531-fc48d24b8844.png">
|
closed
|
2021-12-29T15:20:26Z
|
2022-03-07T15:43:41Z
|
https://github.com/babysor/MockingBird/issues/301
|
[] |
Noctise
| 3
|
deeppavlov/DeepPavlov
|
nlp
| 1,218
|
Warnings in basic_clasf_reader for valid and test files
|
The absence of valid/test files shouldn't be a reason for [warnings](https://github.com/deepmipt/DeepPavlov/blob/master/deeppavlov/dataset_readers/basic_classification_reader.py#L109),
because this is expected behavior when I pass only a test.csv file and split the data in the data iterator.
It could be a `log.info`, but even better, it should be exposed outside with explicit arguments.
|
closed
|
2020-05-14T15:23:39Z
|
2020-05-27T15:09:46Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1218
|
[] |
grayskripko
| 4
|
gradio-app/gradio
|
data-visualization
| 10,585
|
Concurrent issues still exist
|
> Concurrent issues still exist
_Originally posted by @SnowWindDancing in [#8034](https://github.com/gradio-app/gradio/issues/8034#issuecomment-2656019651)_
|
closed
|
2025-02-13T09:33:52Z
|
2025-02-18T01:15:14Z
|
https://github.com/gradio-app/gradio/issues/10585
|
[
"pending clarification"
] |
SnowWindDancing
| 2
|
dropbox/PyHive
|
sqlalchemy
| 251
|
execute hive query hangs
|
When using PyHive to execute SQL, the execution takes more than 25 minutes, causing a hang that prevents the program from proceeding to the next step, even though the Hive job has finished. I used dtruss to trace the Python program's kernel calls and found that the Python side did not receive the socket-related packets.
-----------------------------------------------------
normal kernel call:
SYSCALL(args) = return
recvfrom(0x5, 0x10274AC20, 0x6D) = 109 0
sendto(0x5, 0x102798518, 0x64) = 100 0
recvfrom(0x5, 0x102750C80, 0x4) = 4 0
recvfrom(0x5, 0x10278F5A0, 0x2A) = 42 0
sendto(0x5, 0x10272EA28, 0x57) = 87 0
recvfrom(0x5, 0x102750E38, 0x4) = 4 0
recvfrom(0x5, 0x10278F5F0, 0x28) = 40 0
close(0x5) = 0 0
-----------------------------------------------------
abnormal kernel call:
SYSCALL(args) = return
^C
Env:
external programs use iptables to access Hive in private networks.
|
open
|
2018-11-08T07:10:28Z
|
2020-10-13T10:39:42Z
|
https://github.com/dropbox/PyHive/issues/251
|
[] |
Junyewu
| 1
|
plotly/dash-table
|
dash
| 552
|
[FEATURE] Allow className for column
|
If a `className` could be specified to a column, and it would be applied to all `td` and/or `th` elements for that column, then this would allow using CSS to control the layout of such columns (width, right vs. left justification of values, etc.).
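As a point of comparison, a sketch of what can already be approximated with `style_cell_conditional` (an existing mechanism, not the requested `className` support; the column names and import path are illustrative):
```python
# illustrative sketch: per-column width/alignment via style_cell_conditional
import dash_table  # on newer Dash versions: from dash import dash_table

table = dash_table.DataTable(
    columns=[{"name": "Price", "id": "price"}, {"name": "Label", "id": "label"}],
    data=[{"price": 9.99, "label": "widget"}],
    style_cell_conditional=[
        {"if": {"column_id": "price"}, "width": "120px", "textAlign": "right"},
    ],
)
```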
|
open
|
2019-08-20T08:38:00Z
|
2019-09-18T21:30:38Z
|
https://github.com/plotly/dash-table/issues/552
|
[] |
orenbenkiki
| 3
|
falconry/falcon
|
api
| 1,742
|
Multiprocessing pool map not working
|
Hi,
Does anyone know a trick for using a multiprocessing pool map inside a Falcon app?
With FastAPI I am able to use pathos.multiprocessing, but the same code is not working inside a Falcon app.
Any idea?
FYI, it is not giving any error, it just ends automatically.
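A minimal sketch of one way this can be wired up, assuming a plain WSGI deployment (the route and helper names here are illustrative, not from the original question):
```python
import falcon
from multiprocessing import Pool


def square(x):
    return x * x


class ComputeResource:
    def on_get(self, req, resp):
        # open the pool inside the request handler so it is created after any
        # pre-forking done by the WSGI server (gunicorn/uwsgi)
        with Pool(processes=4) as pool:
            resp.media = {"squares": pool.map(square, range(10))}


app = falcon.App()  # falcon.API() on Falcon < 3.0
app.add_route("/compute", ComputeResource())
```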
|
closed
|
2020-07-17T12:35:02Z
|
2020-07-23T12:28:42Z
|
https://github.com/falconry/falcon/issues/1742
|
[
"question"
] |
PriyatamNayak
| 10
|
plotly/dash
|
data-science
| 3,214
|
[BUG] dcc.Dropdown width rendering incorrect with Dash 3 rc4
|
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 3.0.0rc4
dash-core-components 2.0.0
dash_design_kit 1.14.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: [e.g. iOS] MacOS
- Browser [e.g. chrome, safari] Chrome & Safari
- Version [e.g. 22] `Version 134.0.6998.88` Arm64
**Describe the bug**
`dcc.Dropdown` renders squashed with Dash 3, whereas it renders at full-width with Dash 2.x
**Expected behavior**
The `dcc.Dropdown` should render the same way between Dash 2.x and Dash 3.x, if there have been no code changes in the app.
**Screenshots**
Dash 3.0

Dash 2.0

|
open
|
2025-03-13T14:07:28Z
|
2025-03-18T14:10:16Z
|
https://github.com/plotly/dash/issues/3214
|
[
"bug",
"P1",
"dash-3.0"
] |
susodapop
| 0
|
cvat-ai/cvat
|
tensorflow
| 9,133
|
How to quickly correct mislabeled points after auto-labeling?
|
### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
After using auto-labeling, some points were inaccurately labeled. Since I only need one point annotation per image, is there a mode that allows directly clicking on the desired location to relocate the auto-generated point instead of dragging it with the mouse?
### Describe the solution you'd like
a mode that allows directly clicking on the desired location to relocate the auto-generated point instead of dragging it with the mouse
### Describe alternatives you've considered
Mouse drag
### Additional context
_No response_
|
closed
|
2025-02-21T03:32:45Z
|
2025-02-24T08:35:47Z
|
https://github.com/cvat-ai/cvat/issues/9133
|
[
"enhancement"
] |
Gungnir-xsh
| 1
|
gradio-app/gradio
|
data-science
| 10,309
|
Unsustained functionality of event 'show_progress'
|
### Describe the bug
In textbox.submit(), when I set show_progress='hidden' or 'minimal' to hide the progress animation (namely the spinner and time info), it works for the first input but fails after that. The same behavior is observed with other downstream Gradio components such as TextArea, Chatbot, etc.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import time  # used by tmp() below

import gradio as gr

def tmp(text):
    time.sleep(1)
    return [{'role': 'user', 'content': text},
            {'role': 'assistant', 'content': 'Hi'}]

with gr.Blocks() as demo:
    box = gr.Textbox()
    output = gr.Chatbot(type="messages", layout='bubble')
    box.submit(fn=tmp, inputs=[box], outputs=output, show_progress='minimal')

demo.launch()
```
### Screenshot


### Logs
_No response_
### System Info
```shell
I am using gradio '5.5.0'
```
### Severity
I can work around it
|
closed
|
2025-01-08T00:14:56Z
|
2025-01-08T17:37:38Z
|
https://github.com/gradio-app/gradio/issues/10309
|
[
"bug",
"needs repro"
] |
ZhaoyuanQiu
| 2
|
d2l-ai/d2l-en
|
pytorch
| 1,715
|
sec 11.11.1 (learning rate schedule) has a null effect
|
In the PyTorch code, the difference between a constant learning rate of 0.3 and square-root scheduling is nonexistent, despite claims that the latter works better.
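For reference, a small sketch of the two settings being compared (illustrative PyTorch only, not the book's code):
```python
import torch

net = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(net.parameters(), lr=0.3)

# constant schedule: leave opt untouched.
# square-root schedule: lr_t = 0.3 / sqrt(t + 1), expressed as a multiplier
sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=lambda t: (t + 1) ** -0.5)
# training loop would call opt.step() followed by sched.step() each epoch
```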
|
open
|
2021-04-11T01:29:03Z
|
2021-04-12T17:23:16Z
|
https://github.com/d2l-ai/d2l-en/issues/1715
|
[] |
murphyk
| 0
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,585
|
Training pix2pix with different input size
|
When I try to train the pix2pix model with the `--preprocess`, `--load_size` or `--crop_size` flag I run into an issue.
I try to train the pix2pix model on grayscale 400x400 images. For this I use those parameters: `'--model' 'pix2pix' '--direction' 'BtoA' '--input_nc' '1' '--output_nc' '1' '--load_size' '400' '--crop_size' '400'`.
I get a RuntimeError on line 536 of the networks.py script:
```
def forward(self, x):
    if self.outermost:
        return self.model(x)
    else:  # add skip connections
        return torch.cat([x, self.model(x)], 1)  # <--- error
```
`RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 3 but got size 2 for tensor number 1 in the list.`
When evaluating self.model(x) I get a shape of `torch.Size([1, 512, 2, 2])` however, x has the shape `torch.Size([1, 512, 3, 3])`.
I tried different sizes, used even and odd numbers, and made the load size bigger than the crop size. I also tried using RGB images and getting rid of the `input_nc` and `output_nc` flags. So far without success.
Does anyone have a hint?
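One hedged guess, not confirmed in this thread: the default pix2pix generator (`unet_256`) halves the spatial resolution eight times, so heights/widths that are not divisible by 2**8 can produce exactly this kind of skip-connection size mismatch, and 400 is not divisible by 256. A small helper for picking a compatible size, under that assumption:
```python
# round a size up to the next multiple the U-Net can downsample cleanly
# (assumption: 8 downsampling steps, as in the default unet_256 generator)
def next_valid_size(size: int, num_downs: int = 8) -> int:
    factor = 2 ** num_downs
    return ((size + factor - 1) // factor) * factor

print(next_valid_size(400))  # 512, e.g. --load_size 512 --crop_size 512
```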
|
open
|
2023-06-27T15:38:42Z
|
2024-06-05T07:18:53Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1585
|
[] |
MrMonk3y
| 3
|
modin-project/modin
|
pandas
| 7,111
|
Implement a remote function decorator with a cache
|
Currently, there is no unified mechanism for putting functions into a distributed store and caching the remote references. There are multiple approaches to solving this issue, for example like here - https://github.com/modin-project/modin/blob/master/modin/core/execution/ray/implementations/pandas_on_ray/partitioning/virtual_partition.py#L64 .
All this functionality could be implemented in a single decorator, for example `@remote_function`, with an internal cache.
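A minimal sketch of the idea, assuming Ray as the execution engine (the names here are illustrative, not Modin's actual API):
```python
import ray

_object_ref_cache = {}


def remote_function(func):
    """Lazily put `func` into the object store and cache the resulting reference."""

    def object_ref():
        if func not in _object_ref_cache:
            _object_ref_cache[func] = ray.put(func)  # serialize once, reuse afterwards
        return _object_ref_cache[func]

    func.object_ref = object_ref
    return func


@remote_function
def add_one(df):
    return df + 1

# callers would pass add_one.object_ref() to remote tasks instead of the raw function
```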
|
closed
|
2024-03-21T14:26:58Z
|
2024-04-03T09:17:39Z
|
https://github.com/modin-project/modin/issues/7111
|
[
"new feature/request 💬"
] |
AndreyPavlenko
| 0
|
pydantic/pydantic-core
|
pydantic
| 1,607
|
Issue while compiling pydantic-core for Chaquopy
|
Hi, I am trying to make pydantic-core work with Chaquopy. I am making progress, but I am facing an issue I do not understand.
Maybe you have some clues about what I am facing.
I am attaching the logs of the error here:
[log.zip](https://github.com/user-attachments/files/18562121/log.zip)
It looks like some APIs related to PyDateTime are not found:
Compiling pydantic-core v2.27.2 (/Work/Inno/sandbox/pr/chaquopy/server/pypi/packages/pydantic-core/build/2.27.2/cp311-cp311-android_24_x86_64/src)
error[E0432]: unresolved imports `pyo3::types::PyDate`, `pyo3::types::PyDateTime`, `pyo3::types::PyDelta`, `pyo3::types::PyDeltaAccess`, `pyo3::types::PyTime`, `pyo3::types::PyTzInfo`
--> src/input/datetime.rs:6:19
|
6 | use pyo3::types::{PyDate, PyDateTime, PyDelta, PyDeltaAccess, PyDict, PyTime, PyTzInfo};
| ^^^^^^ ^^^^^^^^^^ ^^^^^^^ ^^^^^^^^^^^^^ ^^^^^^ ^^^^^^^^ no `PyTzInfo` in `types`
| | | | | |
| | | | | no `PyTime` in `types`
| | | | | help: a similar name exists in the module: `PyType`
| | | | no `PyDeltaAccess` in `types`
| | | no `PyDelta` in `types`
| | no `PyDateTime` in `types`
| no `PyDate` in `types`
Thanks a lot for your consideration.
|
open
|
2025-01-27T17:20:47Z
|
2025-01-28T09:17:37Z
|
https://github.com/pydantic/pydantic-core/issues/1607
|
[] |
FCare
| 4
|
fastapi/sqlmodel
|
fastapi
| 191
|
Flowing large hash values to Postgres BigInt
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from sqlalchemy import BigInteger
from typing import Optional
from sqlmodel import Field, Session, SQLModel, create_engine
class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    # case A
    normhash: Optional[int] = Field(default=None, index=True)
    # case B
    # normhash: Optional[BigInteger] = Field(default=None, index=True)

hero_1 = Hero(normhash=1559512409891417611)

engine = create_engine("postgresql://app:newbpw@somehost:5400/some_db")
SQLModel.metadata.create_all(engine)

with Session(engine) as session:
    session.add(hero_1)
    session.commit()
    # in case A: DataError: (psycopg2.errors.NumericValueOutOfRange) integer out of range
    # (in case B the code won't even finish coming up - no validator error)
    session.refresh(hero_1)
    print(hero_1)
```
### Description
I'm using your default Hero example with the fields replaced by a hash field.
Using Postgres, I'm unable to set up the field for big integers. Case A: using a standard int results in NumericValueOutOfRange at the psycopg2 level.
So, case B: trying to force a Postgres BIGINT using SQLAlchemy's BigInteger, I get:
File "pydantic/validators.py", line 715, in find_validators
RuntimeError: no validator found for <class 'sqlalchemy.sql.sqltypes.BigInteger'>, see `arbitrary_types_allowed` in Config
I know it involves all the different levels, but it seems like a problem with how I am modeling this (and I had validator problems before that turned out to be caused by the way I was using sqlmodel).
Thanks for your creation of sqlmodel - so far I've really enjoyed it along with fastapi!
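A commonly suggested direction, offered here as an assumption rather than something verified against SQLModel 0.0.4: keep the Python annotation as `int` so Pydantic validation still works, and override only the database column type via `sa_column`.
```python
from typing import Optional

from sqlalchemy import BigInteger, Column
from sqlmodel import Field, SQLModel


class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    # Python-side type stays int; the column is created as BIGINT in Postgres
    normhash: Optional[int] = Field(
        default=None,
        sa_column=Column(BigInteger(), index=True),
    )
```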
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.8.12
### Additional Context
_No response_
|
open
|
2021-12-14T11:44:36Z
|
2025-01-09T17:54:42Z
|
https://github.com/fastapi/sqlmodel/issues/191
|
[
"question"
] |
cworkschris
| 10
|
StackStorm/st2
|
automation
| 5,061
|
"unknown file" reported as module when exceptions raised by sensor
|
## SUMMARY
Error messages generated by a sensor show "unknown file" instead of the module name on Python 3 systems, e.g.
`2020-10-15 08:22:43,653 139699041297216 ERROR (unknown file) [-] File "/opt/stackstorm/packs/openstack/sensors/messaging_sensor.py", line 52, in poll`
### STACKSTORM VERSION
3.3dev and 3.2.0
##### OS, environment, install method
Ubuntu 18.04, Single node install from ansible.
## Steps to reproduce the problem
One easy way to get an exception from a sensor:
1. Install StackStorm
2. Install the OpenStack pack, but don't configure its config file
3. The openstack sensor will raise an exception as it can't locate the OS parameters.
4. Examine /var/log/st2/st2sensorcontainer.log
## Expected Results
Logging the exception message should not report "unknown file" for the module.
## Actual Results
"unknown file" is reported on the ERROR lines, e.g.
```
2020-10-15 08:22:43,653 139699041297216 ERROR (unknown file) [-] File "/opt/stackstorm/packs/openstack/sensors/messaging_sensor.py", line 52, in poll
```
|
open
|
2020-10-15T20:06:48Z
|
2020-10-15T20:08:27Z
|
https://github.com/StackStorm/st2/issues/5061
|
[
"bug"
] |
amanda11
| 1
|
vitalik/django-ninja
|
django
| 1,225
|
Disable docs only for specific router
|
Hi, how can I disable the docs only for a specific router or endpoints?
I have a production-ready site, but I need to hide some specific endpoints from the docs.
EDITED:
Sorry, I missed that include_in_schema can be passed as an endpoint parameter.
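For reference, a minimal sketch of that parameter in use (the route name is illustrative):
```python
from ninja import Router

router = Router()

# still callable, but omitted from the generated OpenAPI docs
@router.get("/internal-stats", include_in_schema=False)
def internal_stats(request):
    return {"ok": True}
```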
|
closed
|
2024-07-11T10:26:19Z
|
2024-07-11T10:29:24Z
|
https://github.com/vitalik/django-ninja/issues/1225
|
[] |
rh363
| 0
|
robinhood/faust
|
asyncio
| 116
|
Recursion maximum depth reached error resulting in app crash
|
In trebuchet I found the following stack trace which had made the app crash:
```
[2018-07-06 22:18:51,007: ERROR]: [^-App]: Crashed reason=RecursionError('maximum recursion depth exceeded in comparison',)
Traceback (most recent call last):
File "/home/robinhood/env/lib/python3.6/site-packages/faust/app/base.py", line 845, in _on_partitions_revoked
await self.consumer.wait_empty()
File "/home/robinhood/env/lib/python3.6/site-packages/mode/services.py", line 417, in _and_transition
return await fun(self, *args, **kwargs)
File "/home/robinhood/env/lib/python3.6/site-packages/faust/transport/consumer.py", line 305, in wait_empty
await self.commit()
File "/home/robinhood/env/lib/python3.6/site-packages/faust/transport/consumer.py", line 345, in commit
return await self.force_commit(topics)
File "/home/robinhood/env/lib/python3.6/site-packages/mode/services.py", line 417, in _and_transition
return await fun(self, *args, **kwargs)
File "/home/robinhood/env/lib/python3.6/site-packages/faust/transport/consumer.py", line 374, in force_commit
did_commit = await self._commit_tps(commit_tps)
File "/home/robinhood/env/lib/python3.6/site-packages/faust/transport/consumer.py", line 384, in _commit_tps
await self._handle_attached(commit_offsets)
File "/home/robinhood/env/lib/python3.6/site-packages/faust/transport/consumer.py", line 408, in _handle_attached
pending = await attachments.publish_for_tp_offset(tp, offset)
File "/home/robinhood/env/lib/python3.6/site-packages/faust/app/_attached.py", line 140, in publish_for_tp_offset
for fut in attached
File "/home/robinhood/env/lib/python3.6/site-packages/faust/app/_attached.py", line 140, in <listcomp>
for fut in attached
File "/home/robinhood/env/lib/python3.6/site-packages/faust/topics.py", line 303, in publish_message
topic, key, value, partition=message.partition)
File "/home/robinhood/env/lib/python3.6/site-packages/faust/transport/drivers/aiokafka.py", line 668, in send
topic, value, key=key, partition=partition))
File "/home/robinhood/env/lib/python3.6/site-packages/aiokafka/producer/producer.py", line 311, in send
timestamp_ms=timestamp_ms)
File "/home/robinhood/env/lib/python3.6/site-packages/aiokafka/producer/message_accumulator.py", line 257, in add_message
tp, key, value, timeout, timestamp_ms))
File "/home/robinhood/env/lib/python3.6/site-packages/aiokafka/producer/message_accumulator.py", line 257, in add_message
tp, key, value, timeout, timestamp_ms))
File "/home/robinhood/env/lib/python3.6/site-packages/aiokafka/producer/message_accumulator.py", line 257, in add_message
tp, key, value, timeout, timestamp_ms))
[Previous line repeated 934 more times]
File "/home/robinhood/env/lib/python3.6/site-packages/aiokafka/producer/message_accumulator.py", line 252, in add_message
yield from batch.wait_drain(timeout)
File "/home/robinhood/python-3.6.3/lib/python3.6/asyncio/tasks.py", line 301, in wait
if futures.isfuture(fs) or coroutines.iscoroutine(fs):
File "/home/robinhood/python-3.6.3/lib/python3.6/asyncio/coroutines.py", line 270, in iscoroutine
return isinstance(obj, _COROUTINE_TYPES)
File "/home/robinhood/env/lib/python3.6/abc.py", line 188, in __instancecheck__
subclass in cls._abc_negative_cache):
File "/home/robinhood/env/lib/python3.6/_weakrefset.py", line 75, in __contains__
return wr in self.data
RecursionError: maximum recursion depth exceeded in comparison
```
|
closed
|
2018-07-09T18:36:05Z
|
2018-07-31T14:39:16Z
|
https://github.com/robinhood/faust/issues/116
|
[] |
vineetgoel
| 1
|
Miserlou/Zappa
|
django
| 1,291
|
CSRF cookie not set
|
<!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
When running in a local environment, a csrf cookie is set by my Django view.
## Actual Behavior
However, after deploying to AWS Lambda, no csrf cookie is dropped.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: zappa==0.45.1
* Python version: Python 3.6.2
* Django version: Django==1.11.7
|
closed
|
2017-12-12T16:17:21Z
|
2017-12-13T08:02:28Z
|
https://github.com/Miserlou/Zappa/issues/1291
|
[] |
LaundroMat
| 4
|
keras-team/autokeras
|
tensorflow
| 1,713
|
Sometimes fit function ends before max_trials is reached, similar for epochs
|
### Bug Description
Hello, I have an issue with the following script. Sometimes the fit function does not perform 100 trials but ends after fewer. When it is restarted, it continues with more trials. The same goes for epochs: sometimes epochs do not continue until 40 but end at 12, 22, or similar.
Is this by design, or is it an error?
```
import numpy as np
import pandas as pd
import tensorflow as tf
import autokeras as ak
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
    overwrite=False, max_trials=100
)
clf.fit("file.csv","gradient",validation_split=0.2,epochs=40)
# Evaluate the best model with testing data.
print(clf.evaluate("file.csv", "gradient"))
```
### Bug Reproduction
Code for reproducing the bug:
Data used by the code:
### Expected Behavior
<!---
If not so obvious to see the bug from the running results,
please briefly describe the expected behavior.
-->
### Setup Details
Include the details about the versions of:
- OS type and version:
- Python: 3.8
- autokeras: 1.0.18
- keras-tuner:
- scikit-learn:
- numpy:
- pandas:
- tensorflow:
### Additional context
<!---
If applicable, add any other context about the problem.
-->
|
open
|
2022-04-22T09:17:35Z
|
2022-04-22T20:20:35Z
|
https://github.com/keras-team/autokeras/issues/1713
|
[] |
jbrepogmailcom
| 1
|
adithya-s-k/marker-api
|
rest-api
| 21
|
KeyError: 'sdpa'
|
Hi! When I try to run the simple server I get this error:
> config.json: 100%|████████████████████████████████████████████████████████████████████| 811/811 [00:00<?, ?B/s]
> model.safetensors: 100%|████████████████████████████████████████████████████| 154M/154M [00:07<00:00, 20.4MB/s]
> Loaded detection model vikp/surya_det3 on device cpu with dtype torch.float32
> preprocessor_config.json: 100%|███████████████████████████████████████████████████████| 675/675 [00:00<?, ?B/s]
> config.json: 100%|████████████████████████████████████████████████████████████████| 1.32k/1.32k [00:00<?, ?B/s]
> model.safetensors: 100%|████████████████████████████████████████████████████| 154M/154M [00:07<00:00, 21.1MB/s]
> Loaded detection model vikp/surya_layout3 on device cpu with dtype torch.float32
> preprocessor_config.json: 100%|███████████████████████████████████████████████████████| 373/373 [00:00<?, ?B/s]
> config.json: 100%|████████████████████████████████████████████████████████████████| 5.04k/5.04k [00:00<?, ?B/s]
> model.safetensors: 100%|████████████████████████████████████████████████████| 550M/550M [00:29<00:00, 18.8MB/s]
> ERROR: Traceback (most recent call last):
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\starlette\routing.py", line 693, in lifespan
> async with self.lifespan_context(app) as maybe_state:
> ^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\contextlib.py", line 210, in __aenter__
> return await anext(self.gen)
> ^^^^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\gradio\routes.py", line 1650, in new_lifespan
> async with old_lifespan(
> ^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\contextlib.py", line 210, in __aenter__
> return await anext(self.gen)
> ^^^^^^^^^^^^^^^^^^^^^
> File "D:\ppgbe\Documents\GitHub\marker-api\server.py", line 39, in lifespan
> model_list = load_all_models()
> ^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\marker\models.py", line 74, in load_all_models
> order = setup_order_model(device, dtype)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\marker\models.py", line 61, in setup_order_model
> model = load_order_model()
> ^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\surya\model\ordering\model.py", line 27, in load_model
> model = OrderVisionEncoderDecoderModel.from_pretrained(checkpoint, config=config, torch_dtype=dtype)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\models\vision_encoder_decoder\modeling_vision_encoder_decoder.py", line 376, in from_pretrained
> return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\modeling_utils.py", line 4097, in from_pretrained
> model = cls(config, *model_args, **model_kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\models\vision_encoder_decoder\modeling_vision_encoder_decoder.py", line 199, in __init__
> decoder = AutoModelForCausalLM.from_config(config.decoder)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\models\auto\auto_factory.py", line 440, in from_config
> return model_class._from_config(config, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\modeling_utils.py", line 1544, in _from_config
> model = cls(config, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\surya\model\ordering\decoder.py", line 495, in __init__
> self.model = MBartOrderDecoderWrapper(config)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\surya\model\ordering\decoder.py", line 480, in __init__
> self.decoder = MBartOrderDecoder(config)
> ^^^^^^^^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\surya\model\ordering\decoder.py", line 294, in __init__
> self.layers = nn.ModuleList([MBartOrderDecoderLayer(config) for _ in range(config.decoder_layers)])
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "C:\Users\ppgbe\AppData\Local\Programs\Python\Python312\Lib\site-packages\surya\model\ordering\decoder.py", line 209, in __init__
> self.self_attn = MBART_ATTENTION_CLASSES[config._attn_implementation](
> ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> KeyError: 'sdpa'
>
> ERROR: Application startup failed. Exiting.
|
open
|
2024-11-13T11:51:29Z
|
2025-01-04T12:06:16Z
|
https://github.com/adithya-s-k/marker-api/issues/21
|
[] |
pblack476
| 3
|
desec-io/desec-stack
|
rest-api
| 540
|
Add Record-Level API
|
An idea how to *add* records to an existing RR set could be like this:
```
POST desec/example.com/rrsets/www/A/records/
[
"1.2.3.4",
"4.3.2.1"
]
```
To delete *some* records,
```
DELETE desec/example.com/rrsets/www/A/records/
[
"1.2.3.4"
]
```
could be used.
Some clients may have trouble with DELETE requests that contain a payload (but that is actually allowed).
Another issue may be that this cannot be done within a larger transaction.
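For illustration, a rough client-side sketch of how the proposed addition could be exercised (the domain, token, and exact URL here are hypothetical, not an existing deSEC endpoint):
```python
import requests

# Hypothetical sketch of the proposed record-level addition; the token,
# domain, and URL shape are illustrative only.
API = "https://desec.io/api/v1/domains/example.com/rrsets/www/A/records/"
resp = requests.post(
    API,
    json=["1.2.3.4", "4.3.2.1"],
    headers={"Authorization": "Token my-secret-token"},
)
resp.raise_for_status()
```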
|
open
|
2021-05-27T13:42:13Z
|
2022-01-03T09:27:18Z
|
https://github.com/desec-io/desec-stack/issues/540
|
[
"enhancement",
"api",
"prio: medium"
] |
nils-wisiol
| 2
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,073
|
python app doesn't start after import from flask_socketio
|
I am using Ubuntu 18.04 and trying to build a small chat website for a course using Flask and JS.
The problem happens when I try to run `python3 app.py` instead of `flask run`, as I read here. I did find that I need eventlet, or gevent plus gevent-websocket, but after installing each one at a time and trying to run, all I can see is the command line blinking (hanging). With gevent, if I press Ctrl+C I get this:
> ^CKeyboardInterrupt
2019-10-01T15:20:30Z
Traceback (most recent call last):
File "application.py", line 88, in <module>
socketio.run(app)
File "/home/kraytos/pyve/project2_env/lib/python3.6/site-packages/flask_socketio/__init__.py", line 602, in run
self.wsgi_server.serve_forever()
File "/home/kraytos/pyve/project2_env/lib/python3.6/site-packages/gevent/baseserver.py", line 369, in serve_forever
self._stop_event.wait()
File "src/gevent/event.py", line 127, in gevent._event.Event.wait
File "src/gevent/_abstract_linkable.py", line 192, in gevent.__abstract_linkable.AbstractLinkable._wait
File "src/gevent/_abstract_linkable.py", line 165, in gevent.__abstract_linkable.AbstractLinkable._wait_core
File "src/gevent/_abstract_linkable.py", line 169, in gevent.__abstract_linkable.AbstractLinkable._wait_core
File "src/gevent/_greenlet_primitives.py", line 60, in gevent.__greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 60, in gevent.__greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 64, in gevent.__greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/__greenlet_primitives.pxd", line 35, in gevent.__greenlet_primitives._greenlet_switch
KeyboardInterrupt
Here is the `pip freeze` output from inside my virtual environment:
> Click==7.0
dnspython==1.16.0
Flask==1.1.1
Flask-Session==0.3.1
Flask-SocketIO==4.2.1
gevent==1.4.0
gevent-websocket==0.10.1
greenlet==0.4.15
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
monotonic==1.5
python-engineio==3.9.3
python-socketio==4.3.1
six==1.12.0
Werkzeug==0.16.0
This hang has nothing to do with the Python or JavaScript code, as it still hangs even if I remove all code lines except the imports, like `from flask_socketio import SocketIO, emit`.
Anyway, this is the app's Python code:
```
import os
from datetime import datetime
from flask import Flask, session, render_template, redirect, request, url_for, flash, jsonify
from flask_session import Session
from flask_socketio import SocketIO, emit
app = Flask(__name__)
app.config["SECRET_KEY"] = 'secrettt'
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
Session(app)
socketio = SocketIO(app)
data = {}
@app.route("/", methods=["GET", "POST"])
def index():
if request.method == 'POST':
if request.form.get('display_name'):
dis_name = request.form.get('display_name')
session['username'] = dis_name
return jsonify({"success": True, "dis_name": session['username']})
elif request.form.get('channel_name'):
ch_name = request.form.get('channel_name')
session['active_channel'] = ch_name
return jsonify({"success": True, "redirect": f"/channel/{session['active_channel']}"})
# return redirect(url_for('channel', ch_name=ch_name))
elif request.form.get('cr_name_name'):
if 'username' in session:
flash(f"you already have a display name '{session['username']}'", 'danger')
return render_template('index.html', ch_list=list(data.keys()), title="Flack Home page")
else:
dis_name = request.form.get('cr_name_name')
ch_list = list(data.keys())
if not dis_name in ch_list:
flash(f"you have successfully entered a name '{dis_name}' for chatting", 'success')
session['username'] = dis_name
return render_template('index.html', dis_name=dis_name, ch_list=ch_list, title="Flack Home page")
else:
flash(f"the name you have entered {dis_name} is the same as a channel name, choose another one", "danger")
return render_template('index.html', ch_list=ch_list, title="Flack Home page")
elif request.form.get('cr_ch_name'):
if 'username' in session:
ch_name = request.form.get('cr_ch_name')
ch_list = list(data.keys())
if not ch_name in ch_list:
data.update({f"{ch_name}": []})
return render_template('index.html', dis_name=session['username'], ch_list=list(data.keys()), title="Flack Home page")
else:
flash(f"the channel name you have entered {ch_name} already exists, choose another one", "danger")
return render_template('index.html', dis_name=session['username'], ch_list=ch_list, title="Flack Home page")
else:
flash(f"you have to get a display name first", "danger")
return render_template('index.html', ch_list=list(data.keys()), title="Flack Home page")
ch_list = list(data.keys())
return render_template('index.html', ch_list=ch_list, title="Flack Home page")
@app.route("/channel/<ch_name>")
def channel(ch_name):
if 'username' in session:
msgs = data.get(f"{ch_name}")
return render_template("channel.html", msgs=msgs, dis_name=session['username'])
flash(f"you dont have access to chat untill you create display name ...", "danger")
return redirect(url_for('index'))
if __name__ == '__main__':
socketio.run(app)
```
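A minimal sketch of one common fix for this kind of hang, assuming the gevent server needs monkey patching before any other import (this is a guess based on typical gevent setups, not something confirmed in the report):
```python
# Hedged sketch: with the gevent web server, monkey patching usually has to
# happen before any other module is imported.
from gevent import monkey
monkey.patch_all()

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

if __name__ == '__main__':
    socketio.run(app)
```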
|
closed
|
2019-10-01T15:40:48Z
|
2019-12-15T17:43:25Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1073
|
[
"question"
] |
engragy
| 3
|
modoboa/modoboa
|
django
| 3,158
|
[Feature request] Configure external smtp relay from within web gui
|
Some of us have port 25 blocked by the ISP, and one of the ways to bypass that is to use an external SMTP provider.
Currently the only way to set or change an external smtp relay is by editing configuration files. It would be much better if this could be done from within the web gui, especially since a web gui configuration change or plugin update could potentially wipe out any manual configuration file edits.
I saw this was discussed before and was wondering if there were any plans for this feature.
### Discussed in https://github.com/modoboa/modoboa/discussions/2662
<div type='discussions-op-text'>
<sup>Originally posted by **rajeevrrs** November 1, 2022</sup>
Hi,
1. How to use external SMTP (Amazon SES) for sending a mail. (How to config external SMTP in Modoboa Webmail or Modoboa Admin Panel)
2. How to install Roundcube as a webmail client? </div>
|
closed
|
2024-01-10T04:35:38Z
|
2025-02-25T00:18:41Z
|
https://github.com/modoboa/modoboa/issues/3158
|
[
"feedback-needed",
"stale"
] |
aleczdr
| 3
|
sinaptik-ai/pandas-ai
|
data-visualization
| 579
|
StreamlitMiddleware does not work in SmartDatalake.
|
### 🐛 Describe the bug
I am currently using pandasai version 1.2 and streamlit version 1.26.0. Before this, I used pandasai version 0.8.0.
In version 0.8.0, I used the StreamlitMiddleware configuration with the following code:
``` python
pandas_ai = PandasAI(pandasai_llm,
middlewares=[StreamlitMiddleware()],
verbose=True,
non_default_prompts = customized_prompts,
custom_whitelisted_dependencies = ['pandas', 'numpy', 'matplotlib', 'streamlit']
)
```
StreamlitMiddleware worked fine and plt.show() executed correctly in streamlit.
When I upgraded to version 1.2, my code became:
``` python
smart_lake_config = {
"llm": pandasai_llm,
"verbose": True,
"enable_cache": False,
"custom_prompts": {
"generate_python_code": CustomGeneratePythonCodePrompt(),
"correct_error": CorrectErrorPrompt()
},
"custom_whitelisted_dependencies": ['pandas', 'numpy', 'matplotlib', 'streamlit'],
"middlewares": [StreamlitMiddleware()]
}
dl = SmartDatalake(dfs=dfs, config=smart_lake_config)
result = dl.chat(pandas_ai_prompt)
st.write(result)
```
Images could not be displayed properly. So, I declared in CustomGeneratePythonCodePrompt:
``` python
# TODO import all the dependencies required
{default_import}
you are under streamlit framework, so you should use st.pyplot(plt.gcf()) to display the chart.
...
```
This worked, but every time I plot, it plots two charts: the first chart is incorrect (usually empty) and the second is right, which is very confusing.
By the way, in my environment, enable_cache must be set to False to start the streamlit project.
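One possible workaround sketch for the duplicated chart, assuming the first (empty) chart comes from a stale matplotlib figure that is never cleared between calls (this is a guess, not a confirmed cause):
```python
import matplotlib.pyplot as plt
import streamlit as st

# Hedged sketch: render the current figure once, then clear it so the next
# dl.chat() call starts from an empty canvas.
fig = plt.gcf()
if fig.get_axes():  # only show the figure if something was actually drawn
    st.pyplot(fig)
plt.clf()
```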
|
closed
|
2023-09-20T10:20:33Z
|
2024-06-01T00:20:29Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/579
|
[] |
mzx4ever
| 4
|
piccolo-orm/piccolo
|
fastapi
| 106
|
ForeignKeys should allow supporting different types
|
Right now foreign keys use the int type for values. It would be nice if another type could be specified like bigint or something else
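For illustration, a purely hypothetical sketch of the kind of API being asked for (the `column_type` parameter does not exist in Piccolo; it only shows the intent):
```python
from piccolo.columns import BigInt, ForeignKey
from piccolo.table import Table

class Manager(Table):
    pass

class Band(Table):
    # Hypothetical 'column_type' argument: let the foreign key column store
    # BigInt values instead of the default integer type.
    manager = ForeignKey(references=Manager, column_type=BigInt)
```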
|
closed
|
2021-06-05T07:16:17Z
|
2021-06-05T07:59:17Z
|
https://github.com/piccolo-orm/piccolo/issues/106
|
[] |
cheesycod
| 0
|
pydantic/FastUI
|
fastapi
| 169
|
Does it support charts and complex applications?
|
Hi Team,
This framework is awesome.
1. I want to create charts like line, bar and maps using this framework. I checked the components section and there is no related component.
2. I want to create a multi-page application; does it support that?
3. How can we redirect from one page to another?
I know it is still at an early development pace; if there are any alternatives, please let me know.
Your suggestions are appreciated.
|
closed
|
2024-01-30T15:40:49Z
|
2024-02-09T07:10:38Z
|
https://github.com/pydantic/FastUI/issues/169
|
[] |
prasadkumar0590
| 1
|
marshmallow-code/apispec
|
rest-api
| 827
|
callbacks in operation are not resolved to refs
|
Callbacks currently do not resolve refs in their body.
Take the following example:
```yaml
...
post:
callbacks:
onEvent:
http://localhost/callback:
post:
requestBody:
content:
application/json:
schema: FooSchema
```
It should resolve ```FooSchema``` but it doesn't.
|
closed
|
2023-03-05T19:39:51Z
|
2023-03-06T21:16:06Z
|
https://github.com/marshmallow-code/apispec/issues/827
|
[] |
codectl
| 5
|
jina-ai/clip-as-service
|
pytorch
| 38
|
ValueError: Could not find trained model in model_dir: /tmp/tmp_st5oe05
|
Has the service been started up correctly? Why is it using a temporary folder when I have already specified a model_dir in the params?
WARNINGs are shown as follows:
```
usage: app.py -model_dir /tmp/bert/chinese_L-12_H-768_A-12 -num_worker=1
ARG VALUE
__________________________________________________
max_batch_size = 256
max_seq_len = 25
model_dir = /tmp/bert/chinese_L-12_H-768_A-12
num_worker = 1
pooling_layer = -2
pooling_strategy = REDUCE_MEAN
port = 5555
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmp_st5oe05
WARNING:tensorflow:Estimator's model_fn (<function model_fn_builder.<locals>.model_fn at 0x7f80e7184598>) includes params argument, but params are not passed to Estimator.
I:WORKER-2:[ser:run:227]:ready and listening
Process BertWorker-1:
Traceback (most recent call last):
File "/usr/lib64/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/xxx/workspace/github/bert-as-service/service/server.py", line 229, in run
for r in self.estimator.predict(input_fn, yield_single_examples=False):
File "/home/xxx/pyenv/ternary/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 488, in predict
self._model_dir))
ValueError: Could not find trained model in model_dir: /tmp/tmp_st5oe05.
```
|
closed
|
2018-11-22T06:03:58Z
|
2018-11-24T12:32:54Z
|
https://github.com/jina-ai/clip-as-service/issues/38
|
[] |
titicaca
| 9
|
docarray/docarray
|
fastapi
| 1,374
|
When will ElasticSearch7 be supported?
|
When will ElasticSearch7 be supported?
|
closed
|
2023-04-14T09:28:10Z
|
2023-04-25T06:26:05Z
|
https://github.com/docarray/docarray/issues/1374
|
[] |
yuanjie-ai
| 3
|
scrapy/scrapy
|
web-scraping
| 5,742
|
Scrapy Shell Always Raises RuntimeError but Works Fine
|
### Description
Scrapy Shell always raises a RuntimeError no matter which URL I request, but otherwise works fine.
### Steps to Reproduce
Execute `scrapy shell 'https://any.url'`; it returns:
```
2022-12-01 21:30:51 [scrapy.utils.log] INFO: Scrapy 2.7.1 started (bot: lightnovel)
2022-12-01 21:30:51 [scrapy.utils.log] INFO: Versions: lxml 4.9.1.0, libxml2 2.9.12, cssselect 1.1.0, parsel 1.7.0, w3lib 2.0.1, Twisted 22.10.0, Python 3.10.5 (tags/v3.10.5:f377153, Jun 6 2022, 16:14:13) [MSC v.1929 64 bit (AMD64)], pyOpenSSL 22.1.0 (OpenSSL 3.0.7 1 Nov 2022), cryptography 38.0.3, Platform Windows-10-10.0.19045-SP0
2022-12-01 21:30:51 [scrapy.crawler] INFO: Overridden settings:
{'AUTOTHROTTLE_ENABLED': True,
'AUTOTHROTTLE_START_DELAY': 1,
'AUTOTHROTTLE_TARGET_CONCURRENCY': 8.0,
'BOT_NAME': 'lightnovel',
'CONCURRENT_REQUESTS': 32,
'CONCURRENT_REQUESTS_PER_IP': 16,
'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter',
'LOGSTATS_INTERVAL': 0,
'LOG_LEVEL': 'INFO',
'NEWSPIDER_MODULE': 'lightnovel.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'SPIDER_MODULES': ['lightnovel.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor',
'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
'(KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36 '
'Edg/107.0.1418.52 Scrapy/2.7.1 (+https://scrapy.org) '
'LightnovelSpider/3.0'}
2022-12-01 21:30:51 [scrapy.extensions.telnet] INFO: Telnet Password: 797751ad72cc1c35
2022-12-01 21:30:51 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.throttle.AutoThrottle']
2022-12-01 21:30:51 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'lightnovel.middlewares.LightnovelDownloaderMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-12-01 21:30:51 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-12-01 21:30:51 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-12-01 21:30:51 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-12-01 21:30:51 [scrapy.core.engine] INFO: Spider opened
2022-12-01 21:30:52 [default] INFO: Spider opened: default
2022-12-01 21:30:52 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.baidu.com> (referer: None)
Traceback (most recent call last):
File "C:\Python310\lib\site-packages\twisted\internet\defer.py", line 892, in _runCallbacks
current.result = callback( # type: ignore[misc]
File "C:\Python310\lib\site-packages\scrapy\utils\defer.py", line 285, in f
return deferred_from_coro(coro_f(*coro_args, **coro_kwargs))
File "C:\Python310\lib\site-packages\scrapy\utils\defer.py", line 272, in deferred_from_coro
event_loop = get_asyncio_event_loop_policy().get_event_loop()
File "C:\Python310\lib\asyncio\events.py", line 656, in get_event_loop
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'Thread-1 (start)'.
2022-12-01 21:30:52 [py.warnings] WARNING: C:\Python310\lib\site-packages\twisted\internet\defer.py:892: RuntimeWarning: coroutine 'SpiderMiddlewareManager.scrape_response.<locals>.process_callback_output' was never awaited
current.result = callback( # type: ignore[misc]
[s] Available Scrapy objects:
[s] scrapy scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s] crawler <scrapy.crawler.Crawler object at 0x000002EFC594F6D0>
[s] item {}
[s] request <GET https://www.baidu.com>
[s] response <200 https://www.baidu.com>
[s] settings <scrapy.settings.Settings object at 0x000002EFC594F670>
[s] spider <DefaultSpider 'default' at 0x2efc5e09450>
[s] Useful shortcuts:
[s] fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s] fetch(req) Fetch a scrapy.Request and update local objects
[s] shelp() Shell help (print this help)
[s] view(response) View response in a browser
>>>
```
**Expected behavior:** It is not expected to show:
```
2022-12-01 21:30:52 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.baidu.com> (referer: None)
Traceback (most recent call last):
File "C:\Python310\lib\site-packages\twisted\internet\defer.py", line 892, in _runCallbacks
current.result = callback( # type: ignore[misc]
File "C:\Python310\lib\site-packages\scrapy\utils\defer.py", line 285, in f
return deferred_from_coro(coro_f(*coro_args, **coro_kwargs))
File "C:\Python310\lib\site-packages\scrapy\utils\defer.py", line 272, in deferred_from_coro
event_loop = get_asyncio_event_loop_policy().get_event_loop()
File "C:\Python310\lib\asyncio\events.py", line 656, in get_event_loop
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'Thread-1 (start)'.
2022-12-01 21:30:52 [py.warnings] WARNING: C:\Python310\lib\site-packages\twisted\internet\defer.py:892: RuntimeWarning: coroutine 'SpiderMiddlewareManager.scrape_response.<locals>.process_callback_output' was never awaited
current.result = callback( # type: ignore[misc]
```
**Reproduces how often:** Always.
### Versions
Scrapy : 2.7.1
lxml : 4.9.1.0
libxml2 : 2.9.12
cssselect : 1.1.0
parsel : 1.7.0
w3lib : 2.0.1
Twisted : 22.10.0
Python : 3.10.5 (tags/v3.10.5:f377153, Jun 6 2022, 16:14:13) [MSC v.1929 64 bit (AMD64)]
pyOpenSSL : 22.1.0 (OpenSSL 3.0.7 1 Nov 2022)
cryptography : 38.0.3
Platform : Windows-10-10.0.19045-SP0
|
closed
|
2022-12-01T13:58:02Z
|
2022-12-01T15:10:32Z
|
https://github.com/scrapy/scrapy/issues/5742
|
[] |
CodingMoeButa
| 5
|
ray-project/ray
|
deep-learning
| 51,464
|
CI test windows://python/ray/serve/tests:test_proxy is flaky
|
CI test **windows://python/ray/serve/tests:test_proxy** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa87-c07f-445d-9db6-96c5167fbcd8
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c50-4a37-b569-65a5282fdc15
DataCaseName-windows://python/ray/serve/tests:test_proxy-END
Managed by OSS Test Policy
|
closed
|
2025-03-18T20:15:02Z
|
2025-03-20T03:42:19Z
|
https://github.com/ray-project/ray/issues/51464
|
[
"bug",
"triage",
"serve",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] |
can-anyscale
| 14
|
ivy-llc/ivy
|
tensorflow
| 28,067
|
Fix Frontend Failing Test: paddle - creation.paddle.eye
|
To-do List: https://github.com/unifyai/ivy/issues/27500
|
closed
|
2024-01-27T10:59:47Z
|
2024-01-27T11:10:43Z
|
https://github.com/ivy-llc/ivy/issues/28067
|
[
"Sub Task"
] |
Sai-Suraj-27
| 2
|
pydata/xarray
|
numpy
| 9,767
|
Appending with to_zarr raises ValueError if append_dim length of existing data is not an integer multiple of chunk size
|
### What happened?
I have code that produces zarr data as output with configurable chunking. Recent builds have been raising unexpected `ValueErrors` about misaligned chunks, despite a) the chunk shaping being the same for both the new and existing data and b) calling `chunk()` _and_ ensuring `encoding['chunks']` is unset on append as suggested in the error message.
The error:
```
ValueError: Specified zarr chunks encoding['chunks']=(14, 500, 500) for variable named 'foo' would overlap multiple dask chunks ((14, 14), (180,), (360,)) on the region (slice(29, None, None), slice(None, None, None), slice(None, None, None)). Writing this array in parallel with dask could lead to corrupted data. Consider either rechunking using `chunk()`, deleting or modifying `encoding['chunks']`, or specify `safe_chunks=False`.
```
This can be observed in the provided MCVE as-is. If `DAYS_PER_APPEND` and the first value of the `CHUNKING` tuple are edited to be integer multiples of each other, the error is not raised. If you then add an offset to the `create()` call for the first dataset so that its length is not an integer multiple of the chunk shape (i.e. `create(DAYS_PER_APPEND + 1, start_dt, LATITUDE_RES)` with `CHUNKING = (14, 50, 50)`), the error appears again, but NOT if this is done for the second dataset. This leads me to conclude that the error is raised because the existing dataset is out of alignment with the chunk shape.
### What did you expect to happen?
I expect appending with `to_zarr` to complete without error regardless of the length of the append dimension in the existing data, provided the chunking of both are the same.
### Minimal Complete Verifiable Example
```Python
from datetime import datetime, timezone, timedelta
import numpy as np
import xarray as xr
import zarr
LATITUDE_RES = 180
DAYS_PER_APPEND = 31
CHUNKING = (14, 50, 50)
def create(count: int, start_date: datetime, lat_res: int):
times = []
for _ in range(count):
times.append(start_date.timestamp())
start_date += timedelta(days=1)
times = np.array(times)
lats = np.linspace(-90, 90, lat_res)
lons = np.linspace(-180, 180, lat_res * 2)
coords = {
'longitude': ('longitude', lons),
'latitude': ('latitude', lats),
'time': ('time', times)
}
ds = xr.Dataset(
data_vars={
'foo': (('time', 'latitude', 'longitude'), np.random.random((count, lat_res, lat_res * 2))),
'bar': (('time', 'latitude', 'longitude'), np.random.random((count, lat_res, lat_res * 2))),
'baz': (('time', 'latitude', 'longitude'), np.random.random((count, lat_res, lat_res * 2))),
},
coords=coords,
)
return ds, start_date
start_dt = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0, tzinfo=timezone.utc)
first_ds, next_dt = create(DAYS_PER_APPEND, start_dt, LATITUDE_RES)
print('Created first dataset')
print(first_ds)
print()
for var in first_ds.data_vars:
first_ds[var] = first_ds[var].chunk(CHUNKING)
encoding = {vname: {'compressor': zarr.Blosc(cname='blosclz', clevel=9), 'chunks': CHUNKING} for vname in first_ds.data_vars}
print('Prepared first dataset')
print(first_ds)
print(f'Encodings: {encoding}')
print('Data variable attributes:')
for var in first_ds.data_vars:
print(f'\t - {var}: {first_ds[var].attrs}')
first_ds.to_zarr(
'/tmp/test.zarr',
mode=None,
append_dim=None,
encoding=encoding,
write_empty_chunks=False,
consolidated=True
)
second_ds, _ = create(DAYS_PER_APPEND, next_dt, LATITUDE_RES)
print('Created second dataset')
print(second_ds)
print()
for var in second_ds.data_vars:
second_ds[var] = second_ds[var].chunk(CHUNKING)
encoding = None
print('Prepared second dataset')
print(second_ds)
print(f'Encodings: {encoding}')
print('Data variable attributes:')
for var in second_ds.data_vars:
print(f'\t - {var}: {second_ds[var].attrs}')
second_ds.to_zarr(
'/tmp/test.zarr',
mode=None,
append_dim='time',
encoding=encoding,
write_empty_chunks=False,
consolidated=True
)
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
(base) root@926b103bd78a:/# python mcve.py
Created first dataset
<xarray.Dataset> Size: 48MB
Dimensions: (time: 31, latitude: 180, longitude: 360)
Coordinates:
* longitude (longitude) float64 3kB -180.0 -179.0 -178.0 ... 179.0 180.0
* latitude (latitude) float64 1kB -90.0 -88.99 -87.99 ... 87.99 88.99 90.0
* time (time) float64 248B 1.731e+09 1.731e+09 ... 1.734e+09 1.734e+09
Data variables:
foo (time, latitude, longitude) float64 16MB 0.06853 ... 0.1764
bar (time, latitude, longitude) float64 16MB 0.7759 ... 0.08998
baz (time, latitude, longitude) float64 16MB 0.7744 0.4205 ... 0.5165
//mcve.py:53: DeprecationWarning: Supplying chunks as dimension-order tuples is deprecated. It will raise an error in the future. Instead use a dict with dimension names as keys.
first_ds[var] = first_ds[var].chunk(CHUNKING)
//mcve.py:53: DeprecationWarning: Supplying chunks as dimension-order tuples is deprecated. It will raise an error in the future. Instead use a dict with dimension names as keys.
first_ds[var] = first_ds[var].chunk(CHUNKING)
Prepared first dataset
<xarray.Dataset> Size: 48MB
Dimensions: (time: 31, latitude: 180, longitude: 360)
Coordinates:
* longitude (longitude) float64 3kB -180.0 -179.0 -178.0 ... 179.0 180.0
* latitude (latitude) float64 1kB -90.0 -88.99 -87.99 ... 87.99 88.99 90.0
* time (time) float64 248B 1.731e+09 1.731e+09 ... 1.734e+09 1.734e+09
Data variables:
foo (time, latitude, longitude) float64 16MB dask.array<chunksize=(14, 50, 50), meta=np.ndarray>
bar (time, latitude, longitude) float64 16MB dask.array<chunksize=(14, 50, 50), meta=np.ndarray>
baz (time, latitude, longitude) float64 16MB dask.array<chunksize=(14, 50, 50), meta=np.ndarray>
Encodings: {'foo': {'compressor': Blosc(cname='blosclz', clevel=9, shuffle=SHUFFLE, blocksize=0), 'chunks': (14, 50, 50)}, 'bar': {'compressor': Blosc(cname='blosclz', clevel=9, shuffle=SHUFFLE, blocksize=0), 'chunks': (14, 50, 50)}, 'baz': {'compressor': Blosc(cname='blosclz', clevel=9, shuffle=SHUFFLE, blocksize=0), 'chunks': (14, 50, 50)}}
Data variable attributes:
- foo: {}
- bar: {}
- baz: {}
Created second dataset
<xarray.Dataset> Size: 48MB
Dimensions: (time: 31, latitude: 180, longitude: 360)
Coordinates:
* longitude (longitude) float64 3kB -180.0 -179.0 -178.0 ... 179.0 180.0
* latitude (latitude) float64 1kB -90.0 -88.99 -87.99 ... 87.99 88.99 90.0
* time (time) float64 248B 1.734e+09 1.734e+09 ... 1.736e+09 1.737e+09
Data variables:
foo (time, latitude, longitude) float64 16MB 0.3227 0.4895 ... 0.7738
bar (time, latitude, longitude) float64 16MB 0.7567 0.2322 ... 0.9079
baz (time, latitude, longitude) float64 16MB 0.4169 0.9223 ... 0.3972
//mcve.py:79: DeprecationWarning: Supplying chunks as dimension-order tuples is deprecated. It will raise an error in the future. Instead use a dict with dimension names as keys.
second_ds[var] = second_ds[var].chunk(CHUNKING)
Prepared second dataset
<xarray.Dataset> Size: 48MB
Dimensions: (time: 31, latitude: 180, longitude: 360)
Coordinates:
* longitude (longitude) float64 3kB -180.0 -179.0 -178.0 ... 179.0 180.0
* latitude (latitude) float64 1kB -90.0 -88.99 -87.99 ... 87.99 88.99 90.0
* time (time) float64 248B 1.734e+09 1.734e+09 ... 1.736e+09 1.737e+09
Data variables:
foo (time, latitude, longitude) float64 16MB dask.array<chunksize=(14, 50, 50), meta=np.ndarray>
bar (time, latitude, longitude) float64 16MB dask.array<chunksize=(14, 50, 50), meta=np.ndarray>
baz (time, latitude, longitude) float64 16MB dask.array<chunksize=(14, 50, 50), meta=np.ndarray>
Encodings: None
Data variable attributes:
- foo: {}
- bar: {}
- baz: {}
Traceback (most recent call last):
File "//mcve.py", line 90, in <module>
second_ds.to_zarr(
File "/opt/conda/lib/python3.11/site-packages/xarray/core/dataset.py", line 2595, in to_zarr
return to_zarr( # type: ignore[call-overload,misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/xarray/backends/api.py", line 2239, in to_zarr
dump_to_store(dataset, zstore, writer, encoding=encoding)
File "/opt/conda/lib/python3.11/site-packages/xarray/backends/api.py", line 1919, in dump_to_store
store.store(variables, attrs, check_encoding, writer, unlimited_dims=unlimited_dims)
File "/opt/conda/lib/python3.11/site-packages/xarray/backends/zarr.py", line 900, in store
self.set_variables(
File "/opt/conda/lib/python3.11/site-packages/xarray/backends/zarr.py", line 1024, in set_variables
encoding = extract_zarr_variable_encoding(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/xarray/backends/zarr.py", line 412, in extract_zarr_variable_encoding
chunks = _determine_zarr_chunks(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/xarray/backends/zarr.py", line 288, in _determine_zarr_chunks
raise ValueError(base_error)
ValueError: Specified zarr chunks encoding['chunks']=(14, 50, 50) for variable named 'bar' would overlap multiple dask chunks ((14, 14, 3), (50, 50, 50, 30), (50, 50, 50, 50, 50, 50, 50, 10)) on the region (slice(31, None, None), slice(None, None, None), slice(None, None, None)). Writing this array in parallel with dask could lead to corrupted data. Consider either rechunking using `chunk()`, deleting or modifying `encoding['chunks']`, or specify `safe_chunks=False`.
(base) root@926b103bd78a:/#
```
### Anything else we need to know?
_No response_
### Environment
<details>
/opt/conda/lib/python3.11/site-packages/_distutils_hack/__init__.py:33: UserWarning: Setuptools is replacing distutils.
warnings.warn("Setuptools is replacing distutils.")
INSTALLED VERSIONS
------------------
commit: None
python: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0]
python-bits: 64
OS: Linux
OS-release: 6.10.11-linuxkit
machine: x86_64
processor:
byteorder: little
LC_ALL: C.UTF-8
LANG: C.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2024.10.0
pandas: 2.2.3
numpy: 1.26.4
scipy: 1.11.4
netCDF4: 1.7.1
pydap: None
h5netcdf: 1.4.0
h5py: 3.12.1
zarr: 2.18.3
cftime: 1.6.4
nc_time_axis: None
iris: None
bottleneck: 1.4.2
dask: 2024.10.0
distributed: 2024.10.0
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.5.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 68.0.0
pip: 23.3
conda: 23.11.0
pytest: None
mypy: None
IPython: None
sphinx: None
</details>
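For completeness, a workaround sketch that follows the suggestion in the error message itself (it trades away the parallel-write safety check, so treat it as an assumption rather than a fix; alternatively, the appended data could be rechunked so its first time-chunk completes the partially filled last chunk of the store):
```python
# Untested sketch: relax the chunk-safety check on append, as suggested by
# the ValueError message.
second_ds.to_zarr(
    '/tmp/test.zarr',
    mode=None,
    append_dim='time',
    encoding=None,
    write_empty_chunks=False,
    consolidated=True,
    safe_chunks=False,
)
```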
|
closed
|
2024-11-11T17:44:44Z
|
2024-12-11T07:57:14Z
|
https://github.com/pydata/xarray/issues/9767
|
[
"topic-zarr",
"plan to close"
] |
RKuttruff
| 15
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,121
|
How to handle message from client ?
|
**Your question**
I was trying to get Flask-SocketIO working by checking whether the server connected to the client and sending a simple message. The Python console shows something like `TypeError: handle_message() takes 1 positional argument but 2 were given`. I don't know how to handle this. As we can see, the message is received (in the logs, not printed from the function), but it throws a new error and the server continues to run. Any insights on this? Thanks.
Client uses HTML + Javascript SocketIO and Server uses Flask-SocketIO,
Workflow:
* Server is running.
* Client connects to server.
* Server responds to client "Connected to server successfully"
* Client responds to server "Connected to client successfully"
```bash
* Debugger is active!
* Debugger PIN: 238-995-173
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
127.0.0.1 - - [10/Dec/2019 15:12:25] "GET / HTTP/1.1" 200 -
f7ebed933faf4a0bb1b38377c9c3262b: Sending packet OPEN data {'sid': 'f7ebed933faf4a0bb1b38377c9c3262b', 'upgrades': [], 'pingTimeout': 60000, 'pingInterval': 25000}
emitting event "message" to f7ebed933faf4a0bb1b38377c9c3262b [/]
f7ebed933faf4a0bb1b38377c9c3262b: Sending packet MESSAGE data 2["message",{"data":"Connected to server successfully"}]
f7ebed933faf4a0bb1b38377c9c3262b: Sending packet MESSAGE data 0
127.0.0.1 - - [10/Dec/2019 15:12:25] "GET /socket.io/?EIO=3&transport=polling&t=MxlC3iO HTTP/1.1" 200 -
f7ebed933faf4a0bb1b38377c9c3262b: Received packet MESSAGE data 2["message","message",{"data":"Connected to client successfully"}]
received event "message" from f7ebed933faf4a0bb1b38377c9c3262b [/]
127.0.0.1 - - [10/Dec/2019 15:12:25] "POST /socket.io/?EIO=3&transport=polling&t=MxlC3kN&sid=f7ebed933faf4a0bb1b38377c9c3262b HTTP/1.1" 200 -
Exception in thread Thread-9:
Traceback (most recent call last):
File "C:\Python\Python36\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Python\Python36\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\HOME\code\MagikMoments\env\lib\site-packages\socketio\server.py", line 651, in _handle_event_internal
r = server._trigger_event(data[0], namespace, sid, *data[1:])
File "C:\Users\HOME\code\MagikMoments\env\lib\site-packages\socketio\server.py", line 680, in _trigger_event
return self.handlers[namespace][event](*args)
File "C:\Users\HOME\code\MagikMoments\env\lib\site-packages\flask_socketio\__init__.py", line 284, in _handler
*args)
File "C:\Users\HOME\code\MagikMoments\env\lib\site-packages\flask_socketio\__init__.py", line 698, in _handle_event
ret = handler(*args)
TypeError: handle_message() takes 1 positional argument but 2 were given
```
If I remove the message argument, it shows:
```bash
127.0.0.1 - - [10/Dec/2019 15:19:19] "GET / HTTP/1.1" 200 -
e5706b09d47349e38ae8225ed1d028e9: Sending packet OPEN data {'sid': 'e5706b09d47349e38ae8225ed1d028e9', 'upgrades': [], 'pingTimeout': 60000, 'pingInterval': 25000}
emitting event "message" to e5706b09d47349e38ae8225ed1d028e9 [/]
e5706b09d47349e38ae8225ed1d028e9: Sending packet MESSAGE data 2["message",{"data":"Connected to server successfully"}]
e5706b09d47349e38ae8225ed1d028e9: Sending packet MESSAGE data 0
127.0.0.1 - - [10/Dec/2019 15:19:19] "GET /socket.io/?EIO=3&transport=polling&t=MxlDej6 HTTP/1.1" 200 -
e5706b09d47349e38ae8225ed1d028e9: Received packet MESSAGE data 2["message","message",{"data":"Connected to client successfully"}]
received event "message" from e5706b09d47349e38ae8225ed1d028e9 [/]
127.0.0.1 - - [10/Dec/2019 15:19:19] "POST /socket.io/?EIO=3&transport=polling&t=MxlDek8&sid=e5706b09d47349e38ae8225ed1d028e9 HTTP/1.1" 200 -
Exception in thread Thread-9:
Traceback (most recent call last):
File "C:\Python\Python36\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Python\Python36\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\HOME\code\MagikMoments\env\lib\site-packages\socketio\server.py", line 651, in _handle_event_internal
r = server._trigger_event(data[0], namespace, sid, *data[1:])
File "C:\Users\HOME\code\MagikMoments\env\lib\site-packages\socketio\server.py", line 680, in _trigger_event
return self.handlers[namespace][event](*args)
File "C:\Users\HOME\code\MagikMoments\env\lib\site-packages\flask_socketio\__init__.py", line 284, in _handler
*args)
File "C:\Users\HOME\code\MagikMoments\env\lib\site-packages\flask_socketio\__init__.py", line 698, in _handle_event
ret = handler(*args)
TypeError: handle_message() takes 0 positional arguments but 2 were given
```
## The Code :
`server.py`
```python3
from flask import Flask, render_template, Response
from flask_socketio import SocketIO, emit
app = Flask(__name__)
socketio = SocketIO(app, logger=True, engineio_logger=True)
@app.route('/')
def index():
return render_template('index.html')
@socketio.on('connect')
def test_connect():
emit('message', {'data': 'Connected to server successfully'})
@socketio.on('message')
def handle_message(message): # Should it take an argument ?
print("Message recieved" + message)
if __name__ == '__main__':
socketio.run(app, debug=True)
```
<br/>
`index.html`
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Display Webcam Stream</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.3.0/socket.io.js"></script>
</head>
<body>
<script>
$(document).ready(function () {
var video = document.querySelector("#videoElement");
var socket = io.connect('http://localhost:5000');
socket.on('connect', ()=> {
// alert("Connection established successfully");
socket.send('message', { data: "Connected to client successfully"});
})
socket.on('message', msg => {
alert("Message recieved = " + msg.data)
})
});
</script>
</body>
</html>
```
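For what it's worth, the client call `socket.send('message', {...})` passes two payload items, so the server-side handler is invoked with two positional arguments; a tolerant handler sketch (an illustration only, not necessarily the intended design) would be:
```python
@socketio.on('message')
def handle_message(*args):
    # Accept however many payload arguments the client sends.
    print("Message received:", args)
```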
|
closed
|
2019-12-10T09:50:35Z
|
2019-12-10T11:13:02Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1121
|
[] |
1ycx
| 2
|
assafelovic/gpt-researcher
|
automation
| 997
|
Error parsing dimension value 100%: invalid literal for int() with base 10: '100%'
|
**Describe the bug**
I'm getting the following error when running the app on latest master.
Error parsing dimension value 100%: invalid literal for int() with base 10: '100%'
**To Reproduce**
Steps to reproduce the behavior:
1. export RETRIEVER="tavily, arxiv, duckduckgo"
2. add "duckduckgo-search" to the default requirements.txt
3. install dependencies
4. run the app
**Expected behavior**
The errors in the console don't lead to anything crashing, but I'm guessing that potentially some information is not being ingested correctly.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: macos
**Additional context**
Not a big deal, let me know if you need more information
|
closed
|
2024-11-25T07:24:01Z
|
2024-11-26T03:25:27Z
|
https://github.com/assafelovic/gpt-researcher/issues/997
|
[] |
happysalada
| 2
|
mars-project/mars
|
pandas
| 2,504
|
Explain `enter_mode` in doc
|
The `enter_mode` stuff is a bit confusing for new core developers; we need to add some docs and comments to explain what it is.
|
open
|
2021-10-09T08:22:06Z
|
2022-01-29T08:00:47Z
|
https://github.com/mars-project/mars/issues/2504
|
[
"type: docs"
] |
qinxuye
| 0
|
microsoft/nni
|
machine-learning
| 5,706
|
pruning_bert_glue example pruning error
|
**Describe the issue**:
**Environment**:
- NNI version: 3.0
- Training service (local|remote|pai|aml|etc): ubuntu deep learning ami
- Client OS: ubuntu
- Server OS (for remote mode only):
- Python version: 3.10.9
- PyTorch/TensorFlow version: Pytorch 2.1.0+cu121
- Is conda/virtualenv/venv used?: no
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
**How to reproduce it?**:
just run the pruning_bert_glue tutorial
```python
from nni.compression.pruning import MovementPruner
from nni.compression.speedup import ModelSpeedup
from nni.compression.utils.external.external_replacer import TransformersAttentionReplacer

def pruning_attn():
    Path('./output/bert_finetuned/').mkdir(parents=True, exist_ok=True)
    model = build_finetuning_model(task_name, f'./output/bert_finetuned/{task_name}.bin')
    trainer = prepare_traced_trainer(model, task_name)
    evaluator = TransformersEvaluator(trainer)

    config_list = [{
        'op_types': ['Linear'],
        'op_names_re': ['bert\.encoder\.layer\.[0-9]*\.attention\.*'],
        'sparse_threshold': 0.1,
        'granularity': [64, 64]
    }]

    pruner = MovementPruner(model, config_list, evaluator, warmup_step=9000, cooldown_begin_step=36000, regular_scale=10)
    pruner.compress(None, 4)
    pruner.unwrap_model()

    masks = pruner.get_masks()
    Path('./output/pruning/').mkdir(parents=True, exist_ok=True)
    torch.save(masks, './output/pruning/attn_masks.pth')
    torch.save(model, './output/pruning/attn_masked_model.pth')

if not skip_exec:
    pruning_attn()
```
```
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[/tmp/ipykernel_56471/3847997197.py:3](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2243617073746f6e65227d.vscode-resource.vscode-cdn.net/tmp/ipykernel_56471/3847997197.py:3): FutureWarning: load_metric is deprecated and will be removed in the next major version of datasets. Use 'evaluate.load' instead, from the new library 🤗 Evaluate: https://huggingface.co/docs/evaluate
metric = load_metric('glue', task_name)
[2023-11-06 02:54:46] WARNING: trainer.optimzer is not wrapped by nni.trace, or trainer.optimzer is None, will using huggingface default optimizer.
[2023-11-06 02:54:46] WARNING: trainer.lr_scheduler is not wrapped by nni.trace, or trainer.lr_scheduler is None, will using huggingface default lr_scheduler.
[2023-11-06 02:54:46] WARNING: Using epochs number as training duration, please make sure the total training steps larger than `cooldown_begin_step`.
You are adding a <class 'nni.compression.utils.evaluator.PatchCallback'> to the callbacks of this Trainer, but there is already one. The currentlist of callbacks is
:DefaultFlowCallback
PrinterCallback
PatchCallback
[/home/ubuntu/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py:557](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2243617073746f6e65227d.vscode-resource.vscode-cdn.net/home/ubuntu/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py:557): UserWarning: This DataLoader will create 12 worker processes in total. Our suggested max number of worker in current system is 4, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
[/home/ubuntu/NAS_exps/new_pruning_bert_glue.ipynb](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2243617073746f6e65227d.vscode-resource.vscode-cdn.net/home/ubuntu/NAS_exps/new_pruning_bert_glue.ipynb) Cell 23 line 3
26 torch.save(model, '[./output/pruning/attn_masked_model.pth](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2243617073746f6e65227d.vscode-resource.vscode-cdn.net/home/ubuntu/NAS_exps/output/pruning/attn_masked_model.pth)')
29 if not skip_exec:
---> 30 pruning_attn()
[/home/ubuntu/NAS_exps/new_pruning_bert_glue.ipynb](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2243617073746f6e65227d.vscode-resource.vscode-cdn.net/home/ubuntu/NAS_exps/new_pruning_bert_glue.ipynb) Cell 23 line 2
12 config_list = [{
13 'op_types': ['Linear'],
14 'op_names_re': ['bert\.encoder\.layer\.[0-9]*\.attention\.*'],
15 'sparse_threshold': 0.1,
16 'granularity': [64, 64]
17 }]
19 pruner = MovementPruner(model, config_list, evaluator, warmup_step=9000, cooldown_begin_step=36000, regular_scale=10)
---> 20 pruner.compress(None, 4)
21 pruner.unwrap_model()
23 masks = pruner.get_masks()
File [~/.local/lib/python3.8/site-packages/nni/compression/pruning/movement_pruner.py:228](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2243617073746f6e65227d.vscode-resource.vscode-cdn.net/home/ubuntu/NAS_exps/~/.local/lib/python3.8/site-packages/nni/compression/pruning/movement_pruner.py:228), in MovementPruner.compress(self, max_steps, max_epochs)
225 warn_msg = \
226 f'Using epochs number as training duration, please make sure the total training steps larger than `cooldown_begin_step`.'
227 _logger.warning(warn_msg)
--> 228 return super().compress(max_steps, max_epochs)
...
-> 1080 assert optimizer is not None
1081 old_step = optimizer.step
1083 def patched_step(_, *args, **kwargs):
AssertionError:
Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...
```
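The warnings above say the trainer's optimizer was never wrapped by `nni.trace`, and the assertion fails because the evaluator finds no optimizer to patch. A rough, untested sketch of handing a traced optimizer to the trainer (the attribute assignment and learning rate are assumptions, not part of the tutorial):
```python
import nni
import torch

# Untested sketch: build an nni.trace-wrapped optimizer so the evaluator has
# something to patch before pruner.compress() is called.
trainer = prepare_traced_trainer(model, task_name)
trainer.optimizer = nni.trace(torch.optim.AdamW)(model.parameters(), lr=2e-5)
evaluator = TransformersEvaluator(trainer)
```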
|
open
|
2023-11-06T03:30:30Z
|
2024-07-13T07:32:45Z
|
https://github.com/microsoft/nni/issues/5706
|
[] |
sukritij29
| 3
|
gradio-app/gradio
|
data-visualization
| 10,849
|
Wrap=True in gr.Dataframe not working in 5.21 & 5.22 release
|
### Describe the bug
`wrap=True` in `gr.Dataframe `
Wrapped as expected in version 5.20

Not wrapped in 5.21 and 5.22

### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
import pandas as pd
gr.Dataframe(
    pd.DataFrame(  # note: pd.DataFrame (pd.Dataframe does not exist), and the value must be list-like
        {"test": ["""Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."""]}),
    wrap=True)
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
------------------------------
Operating System: Linux
gradio version: 5.22.0
gradio_client version: 1.8.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.9.0
audioop-lts is not installed.
fastapi: 0.115.11
ffmpy: 0.5.0
gradio-client==1.8.0 is not installed.
groovy: 0.1.2
httpx: 0.28.1
huggingface-hub: 0.29.3
jinja2: 3.1.6
markupsafe: 3.0.2
numpy: 2.2.4
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.11.1
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.46.1
tomlkit: 0.13.2
typer: 0.15.2
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2025.3.0
httpx: 0.28.1
huggingface-hub: 0.29.3
packaging: 24.2
typing-extensions: 4.12.2
websockets: 15.0.1
```
### Severity
Blocking usage of gradio
|
open
|
2025-03-21T02:29:48Z
|
2025-03-21T11:04:16Z
|
https://github.com/gradio-app/gradio/issues/10849
|
[
"bug"
] |
xxxpsyduck
| 1
|
MaxHalford/prince
|
scikit-learn
| 59
|
How to reconstruct/recover the dataset after an FAMD transform? (inverse_transform-like)
|
Hi! I've started using the Prince package in order to apply FAMD to my binary and categorical dataset (btw, thanks @MaxHalford for this amazing library!!!).
Now I'm trying to find a way to reconstruct my original dataset (something like the inverse_transform function of sklearn's PCA), so I could check which points are in each cluster. But I could not find a way to do this, nor a function that gives me the original indexes.
Any ideas?
Thankss!!
|
closed
|
2019-03-24T12:13:17Z
|
2023-02-27T11:49:00Z
|
https://github.com/MaxHalford/prince/issues/59
|
[
"enhancement",
"help wanted"
] |
pollyannagoncalves-hotmart
| 5
|
521xueweihan/HelloGitHub
|
python
| 2,068
|
line-io
|
## Project Recommendation
- Project URL: [line-io](https://github.com/wolray/line-io)
- Category: Java
- Planned follow-up updates: richer and friendlier documentation
- Project description:
- Required: a Java library that combines CSV reading/writing with stream enhancements, making IO operations extremely smooth, comparable to (or even stronger than) pandas in Python
- Optional: a Java project used in real production; beginners can read the source to learn how Java reflection can be both fun and practical
- See the Chinese description on the project homepage for details
- Reason for recommendation: currently the most usable CSV reader in the Java ecosystem, bar none. Performance may not be the best, but it is only slightly behind univocity.
- Sample code: see the Chinese description on the project homepage
Ways to increase the chance of the project being included:
1. Go to the HelloGitHub homepage: https://hellogithub.com and search for the project URL you want to recommend, to check whether it has already been recommended.
2. Adjust the project according to the [project review criteria](https://github.com/521xueweihan/HelloGitHub/issues/271)
3. If the project you recommend is included in the HelloGitHub monthly issue, your GitHub account will be shown in the [contributor list](https://github.com/521xueweihan/HelloGitHub/blob/master/content/contributors.md), **and you will be notified in this issue**.
Thanks again for your support of the HelloGitHub project!
|
closed
|
2022-01-09T16:15:42Z
|
2022-01-22T02:33:26Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2068
|
[] |
wolray
| 1
|
huggingface/datasets
|
pandas
| 6,610
|
cast_column to Sequence(subfeatures_dict) has err
|
### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = ais_dataset["train"]
def add_class(example):
example["my_labeled_bbox"] = {"bbox": [100,100,200,200], "label": "cat"}
return example
ais_dataset = ais_dataset.map(add_class, batched=False, num_proc=32)
ais_dataset = ais_dataset.cast_column("my_labeled_bbox", Sequence(
{
"bbox": Sequence(Value(dtype="int64")),
"label": ClassLabel(names=["cat", "dog"])
}))
print(ais_dataset[0])
```
However, executing this code results in an error:
```
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
int64
to
Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)
```
Upon examining the source code in datasets/table.py at line 2035:
```
if isinstance(feature, Sequence) and isinstance(feature.feature, dict):
feature = {
name: Sequence(subfeature, length=feature.length) for name, subfeature in feature.feature.items()
}
```
I noticed that if subfeature is of type Sequence, the code results in Sequence(Sequence(...), ...) and Sequence(ClassLabel(...), ...), which appears to be the source of the error.
### Steps to reproduce the bug
run my demo code
### Expected behavior
no exception
### Environment info
python 3.9
datasets: 2.16.1
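A possible workaround sketch (an untested assumption): since each row stores a single dict rather than a list of dicts, describing the column as a plain dict-of-features instead of wrapping it in `Sequence` may avoid the failing cast:
```python
from datasets import Sequence, Value, ClassLabel

# Untested sketch: cast to a dict of features instead of Sequence(dict).
ais_dataset = ais_dataset.cast_column(
    "my_labeled_bbox",
    {
        "bbox": Sequence(Value(dtype="int64")),
        "label": ClassLabel(names=["cat", "dog"]),
    },
)
```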
|
closed
|
2024-01-23T09:32:32Z
|
2024-01-25T02:15:23Z
|
https://github.com/huggingface/datasets/issues/6610
|
[] |
neiblegy
| 2
|
pyro-ppl/numpyro
|
numpy
| 1,137
|
Prior dependent on multiple parameters
|
Hi! Just wondering if there is a way in `numpyro` to implement the model described here: https://discourse.pymc.io/t/constraint-or-prior-dependent-on-multiple-parameters/3934
Or would you suggest doing something different altogether? Thanks!
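For reference, a minimal NumPyro sketch of a parameter whose prior depends on two other sampled parameters (the distributions and names here are illustrative, not taken from the linked PyMC thread):
```python
import numpyro
import numpyro.distributions as dist

def model(data=None):
    # Two upstream parameters with their own priors.
    mu = numpyro.sample("mu", dist.Normal(0.0, 1.0))
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    # A parameter whose prior depends on both mu and sigma.
    theta = numpyro.sample("theta", dist.Normal(mu, sigma))
    numpyro.sample("obs", dist.Normal(theta, 0.1), obs=data)
```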
|
closed
|
2021-08-27T09:56:58Z
|
2021-08-27T10:00:38Z
|
https://github.com/pyro-ppl/numpyro/issues/1137
|
[] |
tcuongd
| 1
|
docarray/docarray
|
pydantic
| 1,255
|
chore(v2): code quality
|
Let's add isort to our codebase, which would check that imports are alphabetically sorted.
Also, add a functionality to easily format the code while working on it, something like `make format` (calling black and isort), `make black`, `make lint`, etc. Let's also create testing/development dependencies file consisting of above-mentioned libraries, and document the usage in CONTRIBUTING.md
|
closed
|
2023-03-20T10:05:59Z
|
2023-03-20T10:49:31Z
|
https://github.com/docarray/docarray/issues/1255
|
[] |
jupyterjazz
| 8
|
AutoGPTQ/AutoGPTQ
|
nlp
| 709
|
[FEATURE] Enhance pack model speed
|
**Is your feature request related to a problem? Please describe.**
When I quantize [Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B), packing the model takes a long time for some layers.
```python
os.environ['OMP_NUM_THREADS'] = '12'
os.environ['OPENBLAS_NUM_THREADS'] = '12'
os.environ['MKL_NUM_THREADS'] = '12'
os.environ['VECLIB_MAXIMUM_THREADS'] = '12'
os.environ['NUMEXPR_NUM_THREADS'] = '12'
quantize_config = BaseQuantizeConfig(
bits=8, # quantize model to 8-bit
group_size=128, # it is recommended to set the value to 128
desc_act=False, # set to False can significantly speed up inference but the perplexity may slightly bad
damp_percent=0.1,
)
```
**Describe alternatives you've considered**
1. Perhaps use multiprocessing/multithreading to pack multiple layers (or smaller units) in parallel; a rough sketch follows this list.
2. Pack the weights on the GPU.
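A rough sketch of the first alternative, with `pack_single_layer` standing in for whatever currently packs one layer sequentially (the names here are hypothetical, and pickling/GIL trade-offs would still need to be checked against the real packing code):
```python
from concurrent.futures import ProcessPoolExecutor

def pack_single_layer(item):
    # Hypothetical stand-in for the existing per-layer packing routine.
    name, layer = item
    layer.pack()  # placeholder call; the real API may differ
    return name

def pack_model_parallel(named_layers, max_workers=8):
    # Pack independent layers concurrently so a few slow layers
    # (e.g. mlp.down_proj) do not serialize the whole pass.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        for name in pool.map(pack_single_layer, named_layers.items()):
            print(f"packed {name}")
```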
**Additional context**
log:
```log
INFO 2024-07-11 20:13:23.916 [3008655-MainThread] (<module>@test-small-quantize_gptq-int8.py:066) start quantize
....
INFO 2024-07-11 21:55:27.114 [3008655-MainThread] (pack_model@_utils.py:283) Packing model...
Packing model.layers.0.self_attn.k_proj...: 1%| | 2/280 [00:00<00:52, 5.29it/s]
Packing model.layers.0.self_attn.v_proj...: 1%| | 2/280 [00:00<00:52, 5.29it/s]
Packing model.layers.0.self_attn.v_proj...: 1%| | 3/280 [00:00<00:51, 5.36it/s]
Packing model.layers.0.self_attn.o_proj...: 1%| | 3/280 [00:00<00:51, 5.36it/s]
Packing model.layers.0.self_attn.o_proj...: 1%|▏ | 4/280 [00:00<00:51, 5.38it/s]
Packing model.layers.0.mlp.gate_proj...: 1%|▏ | 4/280 [00:00<00:51, 5.38it/s]
Packing model.layers.0.mlp.gate_proj...: 2%|▏ | 5/280 [00:20<33:30, 7.31s/it]
Packing model.layers.0.mlp.up_proj...: 2%|▏ | 5/280 [00:20<33:30, 7.31s/it]
Packing model.layers.0.mlp.up_proj...: 2%|▏ | 6/280 [00:21<23:46, 5.21s/it]
Packing model.layers.0.mlp.down_proj...: 2%|▏ | 6/280 [00:21<23:46, 5.21s/it]
Packing model.layers.0.mlp.down_proj...: 2%|▎ | 7/280 [00:22<16:56, 3.72s/it]
Packing model.layers.1.self_attn.q_proj...: 2%|▎ | 7/280 [00:22<16:56, 3.72s/it]
Packing model.layers.1.self_attn.q_proj...: 3%|▎ | 8/280 [00:22<11:44, 2.59s/it]
Packing model.layers.1.self_attn.k_proj...: 3%|▎ | 8/280 [00:22<11:44, 2.59s/it]
Packing model.layers.1.self_attn.k_proj...: 3%|▎ | 9/280 [00:22<08:16, 1.83s/it]
Packing model.layers.1.self_attn.v_proj...: 3%|▎ | 9/280 [00:22<08:16, 1.83s/it]
Packing model.layers.1.self_attn.v_proj...: 4%|▎ | 10/280 [00:22<05:57, 1.32s/it]
Packing model.layers.1.self_attn.o_proj...: 4%|▎ | 10/280 [00:22<05:57, 1.32s/it]
Packing model.layers.1.self_attn.o_proj...: 4%|▍ | 11/280 [00:23<04:22, 1.03it/s]
Packing model.layers.1.mlp.gate_proj...: 4%|▍ | 11/280 [00:23<04:22, 1.03it/s]
Packing model.layers.1.mlp.gate_proj...: 4%|▍ | 12/280 [02:29<2:54:21, 39.03s/it]
Packing model.layers.1.mlp.up_proj...: 4%|▍ | 12/280 [02:29<2:54:21, 39.03s/it]
Packing model.layers.1.mlp.up_proj...: 5%|▍ | 13/280 [04:20<4:30:36, 60.81s/it]
Packing model.layers.1.mlp.down_proj...: 5%|▍ | 13/280 [04:20<4:30:36, 60.81s/it]
Packing model.layers.1.mlp.down_proj...: 5%|▌ | 14/280 [55:53<72:10:41, 976.85s/it]
Packing model.layers.2.self_attn.q_proj...: 5%|▌ | 14/280 [55:53<72:10:41, 976.85s/it]
Packing model.layers.2.self_attn.q_proj...: 5%|▌ | 15/280 [55:53<50:14:10, 682.45s/it]
Packing model.layers.2.self_attn.k_proj...: 5%|▌ | 15/280 [55:53<50:14:10, 682.45s/it]
Packing model.layers.2.self_attn.k_proj...: 6%|▌ | 16/280 [55:54<34:59:12, 477.09s/it]
Packing model.layers.2.self_attn.v_proj...: 6%|▌ | 16/280 [55:54<34:59:12, 477.09s/it]
Packing model.layers.2.self_attn.v_proj...: 6%|▌ | 17/280 [55:54<24:22:40, 333.69s/it]
Packing model.layers.2.self_attn.o_proj...: 6%|▌ | 17/280 [55:54<24:22:40, 333.69s/it]
Packing model.layers.2.self_attn.o_proj...: 6%|▋ | 18/280 [55:54<16:59:32, 233.48s/it]
Packing model.layers.2.mlp.gate_proj...: 6%|▋ | 18/280 [55:54<16:59:32, 233.48s/it]
Packing model.layers.2.mlp.gate_proj...: 7%|▋ | 19/280 [1:06:47<26:03:51, 359.51s/it]
Packing model.layers.2.mlp.up_proj...: 7%|▋ | 19/280 [1:06:47<26:03:51, 359.51s/it]
Packing model.layers.2.mlp.up_proj...: 7%|▋ | 20/280 [1:21:50<37:45:07, 522.72s/it]
Packing model.layers.2.mlp.down_proj...: 7%|▋ | 20/280 [1:21:50<37:45:07, 522.72s/it]
Packing model.layers.2.mlp.down_proj...: 8%|▊ | 21/280 [1:27:25<33:32:40, 466.26s/it]
Packing model.layers.3.self_attn.q_proj...: 8%|▊ | 21/280 [1:27:25<33:32:40, 466.26s/it]
Packing model.layers.3.self_attn.q_proj...: 8%|▊ | 22/280 [1:27:25<23:23:27, 326.39s/it]
Packing model.layers.3.self_attn.k_proj...: 8%|▊ | 22/280 [1:27:25<23:23:27, 326.39s/it]
Packing model.layers.3.self_attn.k_proj...: 8%|▊ | 23/280 [1:27:25<16:18:44, 228.50s/it]
Packing model.layers.3.self_attn.v_proj...: 8%|▊ | 23/280 [1:27:25<16:18:44, 228.50s/it]
Packing model.layers.3.self_attn.v_proj...: 9%|▊ | 24/280 [1:27:25<11:22:37, 159.99s/it]
Packing model.layers.3.self_attn.o_proj...: 9%|▊ | 24/280 [1:27:25<11:22:37, 159.99s/it]
Packing model.layers.3.self_attn.o_proj...: 9%|▉ | 25/280 [1:27:26<7:56:11, 112.04s/it]
Packing model.layers.3.mlp.gate_proj...: 9%|▉ | 25/280 [1:27:26<7:56:11, 112.04s/it]
```
|
open
|
2024-07-12T09:52:34Z
|
2024-07-13T05:50:07Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/709
|
[
"enhancement"
] |
DeJoker
| 1
|
open-mmlab/mmdetection
|
pytorch
| 12,282
|
When will the next update be released?
|
It has been 8 months since the last update. When will the next one be?
|
open
|
2025-01-03T09:02:33Z
|
2025-01-03T09:02:48Z
|
https://github.com/open-mmlab/mmdetection/issues/12282
|
[] |
yiriyi
| 0
|
betodealmeida/shillelagh
|
sqlalchemy
| 24
|
Use `adapter_kwargs` instead (or in addition?) of `adapter_args`
|
Writing `adapter_args` is hard and unreadable.
|
closed
|
2021-06-21T17:50:34Z
|
2021-06-21T23:27:48Z
|
https://github.com/betodealmeida/shillelagh/issues/24
|
[] |
betodealmeida
| 0
|
aio-libs/aiomysql
|
sqlalchemy
| 86
|
Inconsistent results after insert using sa
|
Using aiomysql.sa with MySQL 5.5.49, I find that when I insert rows into a table and then subsequently query it, the result I get sometimes includes the inserted value and sometimes does not.
If I essentially disable the connection pool by setting minsize and maxsize to 1, everything works as expected. This is, however, of no use for production.
Attached is a simple test program and schema which should handily reproduce the problem:
[test.py.txt](https://github.com/aio-libs/aiomysql/files/328073/test.py.txt)
[test.sql.txt](https://github.com/aio-libs/aiomysql/files/328074/test.sql.txt)
|
closed
|
2016-06-22T14:24:34Z
|
2017-04-17T10:29:07Z
|
https://github.com/aio-libs/aiomysql/issues/86
|
[] |
nealie
| 4
|
dadadel/pyment
|
numpy
| 50
|
Fails on installation using python 3.6.2
|
Something is broken when installing in Python 3.6.2
In python 2.7.14 it works...
```
[humitos@julia:~]$ pyenv virtualenv 2.7.14 test
New python executable in /home/humitos/.pyenv/versions/2.7.14/envs/test/bin/python2.7
Also creating executable in /home/humitos/.pyenv/versions/2.7.14/envs/test/bin/python
Installing setuptools, pip, wheel...done.
Requirement already satisfied: setuptools in /home/humitos/.pyenv/versions/2.7.14/envs/test/lib/python2.7/site-packages
Requirement already satisfied: pip in /home/humitos/.pyenv/versions/2.7.14/envs/test/lib/python2.7/site-packages
(test) [humitos@julia:~]$ pip install pyment
Collecting pyment
Could not find a version that satisfies the requirement pyment (from versions: )
No matching distribution found for pyment
(test) [humitos@julia:~]$ pip install git+https://github.com/dadadel/pyment.git#egg=pyment
Collecting pyment from git+https://github.com/dadadel/pyment.git#egg=pyment
Cloning https://github.com/dadadel/pyment.git to /tmp/pip-build-MOA7T_/pyment
Installing collected packages: pyment
Running setup.py install for pyment ... done
Successfully installed pyment-0.3.2.dev4
(test) [humitos@julia:~]$
```
but in python 3.6.2 I got this error:
```
[humitos@julia:~]$ pyenv virtualenv 3.6.2 test
Requirement already satisfied: setuptools in /home/humitos/.pyenv/versions/3.6.2/envs/test/lib/python3.6/site-packages
Requirement already satisfied: pip in /home/humitos/.pyenv/versions/3.6.2/envs/test/lib/python3.6/site-packages
(test) [humitos@julia:~]$ pip install git+https://github.com/dadadel/pyment.git#egg=pyment
Collecting pyment from git+https://github.com/dadadel/pyment.git#egg=pyment
Cloning https://github.com/dadadel/pyment.git to /tmp/pip-build-sorkrbkd/pyment
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-sorkrbkd/pyment/setup.py", line 5, in <module>
import pyment
File "/tmp/pip-build-sorkrbkd/pyment/pyment/__init__.py", line 1, in <module>
from .pyment import PyComment, __version__, __copyright__, __author__, __licence__
File "/tmp/pip-build-sorkrbkd/pyment/pyment/pyment.py", line 8, in <module>
from docstring import DocString
ModuleNotFoundError: No module named 'docstring'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-sorkrbkd/pyment/
(test) [humitos@julia:~]$ python
```
Thanks!
|
closed
|
2017-10-12T13:14:02Z
|
2017-10-16T12:43:35Z
|
https://github.com/dadadel/pyment/issues/50
|
[] |
humitos
| 2
|
pallets/flask
|
python
| 5,164
|
Config.from_file() feeds descriptor, not string to tomllib.loads()
|
**outline of what the bug is:**
https://github.com/pallets/flask/blob/main/src/flask/config.py#L236
...has us loading from a .toml file path
**Describe how to replicate the bug.**
https://docs.python.org/3/library/tomllib.html provides `load()` for `rb` files, and `loads()` for .toml strings.
**Describe the expected behavior that should have happened but didn't.**
To get to desired behavior, I added another function to read in the .toml and provide it to tomllib.loads()
```
import tomllib
from flask import Flask
def tomllib_loads(fp):
return tomllib.loads(fp.read())
def create_app(config_filename):
app = Flask(__name__)
app.config.from_file(config_filename, load=tomllib_loads, silent=False, text=True)
@app.route("/hello")
def hello():
return "Hello, World!"
return app
```
Environment:
- Python version: 3.11.3
- Flask version: 2.3.2
|
closed
|
2023-06-14T03:13:23Z
|
2023-06-29T00:06:34Z
|
https://github.com/pallets/flask/issues/5164
|
[] |
smitty1eGH
| 3
|
Kanaries/pygwalker
|
plotly
| 324
|
Copy Code is not putting code into clipboard.
|
I cannot copy code. I have tried several browsers, and all of them are able to copy other content. The clipboard still contains the previous data and is not being updated.
|
closed
|
2023-11-21T19:32:34Z
|
2023-12-08T01:44:55Z
|
https://github.com/Kanaries/pygwalker/issues/324
|
[
"bug",
"fixed but needs feedback"
] |
jjpantano
| 6
|
aio-libs/aiopg
|
sqlalchemy
| 833
|
Update "table" with inverse (logic) doesn't work
|
In aiopg, queries that update table fields don't work with self-inversion.
I tried to inverse one field like that:
```python
cmd = 'UPDATE schema.table SET field1 = NOT field1 WHERE field2 = %s'
await cur.execute(cmd, args)
```
It didn't work. Instead, I had to do it in two queries, a _select_ followed by an _update_:
```python
cmd = 'SELECT field1 FROM schema.table WHERE field2 = %s'
res = await cur.execute(cmd, args)
res = not res[0][0]
cmd = 'UPDATE schema.table SET field1 = %s WHERE field2 = %s'
await cur.execute(cmd, (res,) + args)
```
The syntax `SET field = NOT field` is correct and works in the terminal, but I don't know why it doesn't work here.
## System
system: ubuntu 20.04
python: 3.8
aiogp: 1.2.1
|
open
|
2021-05-06T17:21:11Z
|
2021-05-06T17:21:11Z
|
https://github.com/aio-libs/aiopg/issues/833
|
[] |
leichgardt
| 0
|
jina-ai/serve
|
fastapi
| 5,799
|
Can a Flow have multiple protocols?
|
f = Flow(port=8501, protocol=['GRPC', "HTTP"]).add(uses=MyExec).add(uses=Cut)
ValueError: You need to specify as much protocols as ports if you want to use a jina built-in gateway
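Based on the error message, the built-in gateway seems to want one port per protocol, so a sketch like the following might work (this assumes `port` accepts a list, which the error text seems to imply; `MyExec` and `Cut` are the executors from the snippet above):
```python
from jina import Flow

# One port per protocol, matching the "as much protocols as ports" requirement.
f = Flow(port=[8501, 8502], protocol=['GRPC', 'HTTP']).add(uses=MyExec).add(uses=Cut)
```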
|
closed
|
2023-04-12T12:53:13Z
|
2023-04-12T13:38:44Z
|
https://github.com/jina-ai/serve/issues/5799
|
[] |
yuanjie-ai
| 1
|
ivy-llc/ivy
|
numpy
| 28,560
|
Fix Frontend Failing Test: torch - creation.paddle.assign
|
To-do List: https://github.com/unifyai/ivy/issues/27498
|
closed
|
2024-03-12T11:54:11Z
|
2024-03-21T12:03:59Z
|
https://github.com/ivy-llc/ivy/issues/28560
|
[
"Sub Task"
] |
ZJay07
| 0
|
AirtestProject/Airtest
|
automation
| 293
|
[Element locating] The cocos-luk UI tree displays correctly, but locating an element via a parent node to child node path raises an element-not-found error
|
# Describe the bug
Test device: Android
Framework: cocos-luk
Local machine: Mac
**Python version:** `python3.6`
**Airtest version:** `1.0.25`
IDE: PyCharm
# The cocos-luk UI tree, as shown in the screenshot

# Script executed
```
from airtest.core.api import *
from poco.drivers.std import StdPoco
device=init_device(platform="Android",uuid="20DF6FFB")
poco=StdPoco(device=device)
start_app("com.theonepiano.superclassroom")
time.sleep(3)
telinput=poco("<Scene | tag = -1>").child("Button")[1]
print(telinput,"------>",type(telinput))
```
# Error, as shown in the screenshot

# Notes
If telinput=poco("<Scene | tag = -1>").child("Button")[1] is replaced with telinput=poco("Button")[1], the element can be located normally, so I would like to understand why.
|
open
|
2019-03-04T11:11:40Z
|
2019-03-04T11:13:01Z
|
https://github.com/AirtestProject/Airtest/issues/293
|
[] |
ayue-NAN
| 0
|
chatopera/Synonyms
|
nlp
| 139
|
How can synonyms be kept resident in a long-running process?
|
Every time synonyms is called to compare similarity, it has to rebuild the prefix dict and reload the model, which takes quite a long time.
How can synonyms be kept resident in a process, so that other scripts calling the compare() method don't have to repeat the "Building prefix" and "Loading model" steps?
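A minimal sketch of one approach, assuming a Flask dependency and that `synonyms.compare(s1, s2)` is the call in use: load synonyms once in a long-running web process and let other scripts hit it over HTTP instead of re-importing the package each time:
```python
import synonyms  # prefix dict and model are loaded once, at import time
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/compare")
def compare():
    s1 = request.args.get("s1", "")
    s2 = request.args.get("s2", "")
    # Other scripts call this endpoint instead of importing synonyms themselves.
    return jsonify(score=float(synonyms.compare(s1, s2)))

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```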
|
closed
|
2023-06-02T09:55:45Z
|
2023-08-13T00:47:50Z
|
https://github.com/chatopera/Synonyms/issues/139
|
[] |
ohblue
| 1
|
Avaiga/taipy
|
data-visualization
| 2,104
|
[OTHER] Using taipy==3.0.0 , how can markdown be rendered in table?
|
### I need to render Markdown in a table with Taipy==3.0.0 ###
Please help, here is the code:
```python
from taipy.gui import Gui
import markdown
def md_to_html(md_text):
return markdown.markdown(md_text)
md_data = [
"**Bold Text**",
"- List Item 1\n- List Item 2",
"[Link Example](https://www.example.com)",
"*Italic* and **Bold**",
]
html_data = [md_to_html(item) for item in md_data]
page = """
# Taipy Table with HTML (converted from Markdown)
<|{html_data}|table|show_all|markdown|>
"""
Gui(page=page).run(port=8501)
```
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional)
|
open
|
2024-10-20T17:22:07Z
|
2024-10-25T13:09:34Z
|
https://github.com/Avaiga/taipy/issues/2104
|
[
"📄 Documentation",
"🖰 GUI",
"🆘 Help wanted",
"🟨 Priority: Medium",
"✨New feature",
"📝Release Notes"
] |
IshanRattan
| 4
|
pyeve/eve
|
flask
| 950
|
Token Authentication Not working
|
After following the tutorial http://python-eve.org/authentication.html, it seems TokenAuth still uses Basic Auth.
Doing a base64 decode of the Authorization header from
`request.headers` results in a username:password pair.
Any time you try to access an endpoint, it asks for your username and password, even after the token has been sent.
Token Auth does not seem to work at all.
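For reference, a minimal sketch of the documented pattern (the hard-coded token is only for illustration, and the usual settings.py with a DOMAIN is assumed to be present). Note that Eve's TokenAuth reads the token from the Basic auth username field, which may be why decoding the Authorization header still shows a username:password pair:
```python
from eve import Eve
from eve.auth import TokenAuth

class MyTokenAuth(TokenAuth):
    def check_auth(self, token, allowed_roles, resource, method):
        # Real code would look the token up in a data store; this is a sketch.
        return token == "secret-token"

app = Eve(auth=MyTokenAuth)

if __name__ == "__main__":
    app.run()
```
With this pattern the client sends the token as the Basic auth username with an empty password, e.g. `curl -u secret-token: http://localhost:5000/endpoint`.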
|
closed
|
2016-12-12T17:08:35Z
|
2016-12-12T18:17:51Z
|
https://github.com/pyeve/eve/issues/950
|
[] |
ydaniels
| 1
|
explosion/spaCy
|
deep-learning
| 13,338
|
Summary
|
<!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. --> I am trying to extract the summary from the LLM task as an annotation. Is there a config option for this? Doc.text isn't giving me anything.
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System:
* Python Version Used:
* spaCy Version Used:
* Environment Information:
|
closed
|
2024-02-19T12:55:13Z
|
2024-02-22T21:01:31Z
|
https://github.com/explosion/spaCy/issues/13338
|
[
"feat/llm"
] |
drewskidang
| 1
|
python-arq/arq
|
asyncio
| 361
|
Can't deploy ARQ workers on Dokku. redis connection error localhost:6379
|
I've tested ARQ locally and it works just fine.
But when I try to deploy it using Dokku, it can't connect to Redis.
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/redis/asyncio/connection.py", line 709, in connect
await self._connect()
File "/usr/local/lib/python3.10/site-packages/redis/asyncio/connection.py", line 744, in _connect
reader, writer = await asyncio.open_connection(
File "/usr/local/lib/python3.10/asyncio/streams.py", line 47, in open_connection
transport, _ = await loop.create_connection(
File "/usr/local/lib/python3.10/asyncio/base_events.py", line 1063, in create_connection
raise OSError('Multiple exceptions: {}'.format(
OSError: Multiple exceptions: [Errno 111] Connect call failed ('127.0.0.1', 6379), [Errno 99] Cannot assign requested address
```
The funny thing is, I'm not using `127.0.0.1`.
```python
from dynaconf import settings
class BaseWorker:
"""Defines the base settings of a worker."""
functions = []
allow_abort_jobs = True
redis_settings = RedisSettings(host=settings.REDIS_HOST)
class FooReportWorker(BaseWorker):
functions = [run_job]
queue_name = 'foo_report_queue'
class BarReportWorker(BaseWorker):
functions = [run_job]
queue_name = 'bar_report_queue'
```
My Procfile contains something like:
```
web: make run
foo-report-worker: arq src.workers.foos_report_worker.FooReportWorker
bar-report-worker: arq src.workers.bar_report_worker.BarReportWorker
```
My env file contains:
```
REDIS_HOST='redis'
```
As I said, I don't have any problems running it locally, but when I try to deploy it, it doesn't work. It seems like it is not even reading the `redis_settings` attribute.
UPDATE: Just tested it out; I can pass any gibberish in the `host` attribute and, no matter what, it always evaluates to `localhost` during initialization.
While running it in my machine, it works, because localhost is what I need in this case.
Running from my local terminal
```
❯ arq src.workers.bar_report_worker.BarReportWorker
21:02:52: Starting worker for 1 functions: run_job
21:02:52: redis_version=7.0.5 mem_usage=1.32M clients_connected=1 db_keys=12
```
It only detects the wrong host parameter when something is scheduled, but it inits normally.
Same thing happens on Dokku, but it fails right in the initialization, since `127.0.0.1:6379` is not available there.
```
-----> Deploying bar-report-worker (count=1)
Attempting pre-flight checks (bar-report-worker.1)
Waiting for 10 seconds (bar-report-worker.1)
af7cb12712a5a3cca2d464d05c895d7566bc0e3252fe9bbd33d5fa0b0dc253a3
! App container failed to start (bar-report-worker.1)
=====> Start of xp-kye container output (bar-report-worker.1)
19:58:13: redis connection error localhost:6379 ConnectionError Error connecting to localhost:6379. Multiple exceptions: [Errno 111] Connect call failed ('127.0.0.1', 6379), [Errno 99] Cannot assign requested address., 5 retries remaining...
19:58:14: redis connection error localhost:6379 ConnectionError Error connecting to localhost:6379. Multiple exceptions: [Errno 111] Connect call failed ('127.0.0.1', 6379), [Errno 99] Cannot assign requested address., 4 retries remaining...
19:58:15: redis connection error localhost:6379 ConnectionError Error connecting to localhost:6379. Multiple exceptions: [Errno 111] Connect call failed ('127.0.0.1', 6379), [Errno 99] Cannot assign requested address., 3 retries remaining...
19:58:16: redis connection error localhost:6379 ConnectionError Error connecting to localhost:6379. Multiple exceptions: [Errno 111] Connect call failed ('127.0.0.1', 6379), [Errno 99] Cannot assign requested address., 2 retries remaining...
19:58:17: redis connection error localhost:6379 ConnectionError Error connecting to localhost:6379. Multiple exceptions: [Errno 111] Connect call failed ('127.0.0.1', 6379), [Errno 99] Cannot assign requested address., 1 retries remaining...
Traceback (most recent call last):
```
|
closed
|
2022-11-16T23:35:22Z
|
2022-11-17T03:38:24Z
|
https://github.com/python-arq/arq/issues/361
|
[] |
RamonGiovane
| 3
|
Kanaries/pygwalker
|
matplotlib
| 605
|
Graphic walker visualization color schema custom change
|
I have a data set with coordinates and a signal level value for each coordinate. When I display it in the geographic view, I assign a color to each specific group. However, I cannot select the colors individually; I have to pick a color schema. Even when I pick a color schema it does not assign the colors I expect: it sorts the values by count and then assigns colors based on that sorting. How can I assign a specific color to each individual range? I need quick help to guide me through this project; any help is appreciated.
|
open
|
2024-08-10T12:46:49Z
|
2024-08-16T10:30:10Z
|
https://github.com/Kanaries/pygwalker/issues/605
|
[
"enhancement",
"graphic-walker"
] |
barbarosyabaci
| 2
|
syrupy-project/syrupy
|
pytest
| 703
|
having a dot in the file extension causes unexpected behavior
|
**Describe the bug**
Hello,
Thanks for syrupy! We ran into a small issue when customizing the file extension.
As mentioned in the title, I ran into an issue when trying to use a file extension like `png.zip`. I'm thinking it's related to having an extra `.` in the file extension.
```console
$ pytest tests/syrupy/extensions/image/test_dot_in_extension.py --snapshot-update
=============================================================== test session starts ================================================================
platform darwin -- Python 3.10.9, pytest-7.2.1, pluggy-1.0.0
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /Users/tolga.eren/work/syrupy, configfile: pyproject.toml
plugins: syrupy-4.0.0, xdist-3.1.0, benchmark-4.0.0
collected 1 item
tests/syrupy/extensions/image/test_dot_in_extension.py . [100%]
------------------------------------------------------------- snapshot report summary --------------------------------------------------------------
1 snapshot passed. 1 unused snapshot deleted.
Deleted test_dot_in_file_extension.png (tests/syrupy/extensions/image/__snapshots__/test_dot_in_extension/test_dot_in_file_extension.png.zip)
================================================================ 1 passed in 0.01s =================================================================
```
The unexpected part is here:
1. Reporting says `1 snapshot passed. 1 unused snapshot deleted.`: There wasn't an unused snapshot and it wasn't deleted
2. If I run `--snapshot-update` again, it now deletes the snapshot file, which it shouldn't.
**To reproduce**
I've modified one of the existing tests to reproduce:
```python
# tests/syrupy/extensions/image/test_dot_in_extension.py
import base64
import pytest
from syrupy.extensions.single_file import SingleFileSnapshotExtension
class DotInFileExtension(SingleFileSnapshotExtension):
_file_extension = "png.zip"
actual_png = base64.b64decode(
b"iVBORw0KGgoAAAANSUhEUgAAADIAAAAyBAMAAADsEZWCAAAAG1BMVEXMzMy"
b"Wlpaqqqq3t7exsbGcnJy+vr6jo6PFxcUFpPI/AAAACXBIWXMAAA7EAAAOxA"
b"GVKw4bAAAAQUlEQVQ4jWNgGAWjgP6ASdncAEaiAhaGiACmFhCJLsMaIiDAE"
b"QEi0WXYEiMCOCJAJIY9KuYGTC0gknpuHwXDGwAA5fsIZw0iYWYAAAAASUVO"
b"RK5CYII="
)
@pytest.fixture
def snapshot_dot_in_file_extension(snapshot):
return snapshot.use_extension(DotInFileExtension)
def test_dot_in_file_extension(snapshot_dot_in_file_extension):
assert actual_png == snapshot_dot_in_file_extension
```
Run `pytest tests/syrupy/extensions/image/test_dot_in_extension.py --snapshot-update` twice to observe the unexpected behavior.
**Expected behavior**
1. Correct reporting as in : `1 snapshot generated.`
2. and not deleting the generated snapshot in the second update run.
**Environment (please complete the following information):**
I've tested in the main branch
|
closed
|
2023-02-08T08:19:49Z
|
2023-02-21T00:29:48Z
|
https://github.com/syrupy-project/syrupy/issues/703
|
[
"bug",
"released"
] |
tolgaeren
| 4
|
scikit-hep/awkward
|
numpy
| 3,329
|
In docs.yml, "Build C++ WASM" fails with `ninja` not found
|
For example, in this: https://github.com/scikit-hep/awkward/actions/runs/12188354714/job/34002123054
The failing log is:
```
Run # pyodide-build doesn't work out of the box with pipx
# pyodide-build doesn't work out of the box with pipx
CFLAGS=-fexceptions LDFLAGS=-fexceptions pyodide build --exports whole_archive
shell: /usr/bin/bash -e {0}
env:
X86_64_PYTHON_VERSION: 3.11.0
SOURCE_DATE_EPOCH: 1668811[2](https://github.com/scikit-hep/awkward/actions/runs/12188354714/job/34002123054#step:9:2)11
pythonLocation: /opt/hostedtoolcache/Python/3.11.2/x64
PKG_CONFIG_PATH: /opt/hostedtoolcache/Python/[3](https://github.com/scikit-hep/awkward/actions/runs/12188354714/job/34002123054#step:9:3).11.2/x64/lib/pkgconfig
Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.11.2/x6[4](https://github.com/scikit-hep/awkward/actions/runs/12188354714/job/34002123054#step:9:4)
Python2_ROOT_DIR: /opt/hostedtoolcache/Python/3.11.2/x64
Python3_ROOT_DIR: /opt/hostedtoolcache/Python/3.11.2/x64
LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.11.2/x64/lib
PATH: /home/runner/work/_temp/[5](https://github.com/scikit-hep/awkward/actions/runs/12188354714/job/34002123054#step:9:5)5485d84-5b5a-4[6](https://github.com/scikit-hep/awkward/actions/runs/12188354714/job/34002123054#step:9:6)3f-b5b9-a3a3e161e9e8/emsdk-main:/home/runner/work/_temp/55485d84-5b5a-463f-b5b9-a3a3e161e9e8/emsdk-main/upstream/emscripten:/opt/hostedtoolcache/Python/3.11.2/x64/bin:/opt/hostedtoolcache/Python/3.11.2/x64:/snap/bin:/home/runner/.local/bin:/opt/pipx_bin:/home/runner/.cargo/bin:/home/runner/.config/composer/vendor/bin:/usr/local/.ghcup/bin:/home/runner/.dotnet/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
EMSDK: /home/runner/work/_temp/55485d84-5b5a-463f-b5b9-a3a3e161e9e8/emsdk-main
EMSDK_NODE: /home/runner/work/_temp/55485d84-5b5a-463f-b5b9-a3a3e161e9e8/emsdk-main/node/20.18.0_64bit/bin/node
*** scikit-build-core 0.10.[7](https://github.com/scikit-hep/awkward/actions/runs/12188354714/job/34002123054#step:9:7) using CMake 3.31.1 (wheel)
*** Configuring CMake...
configure: cmake -DCMAKE_C_COMPILER=/tmp/tmpkwce4mjl/cc -DCMAKE_CXX_COMPILER=/tmp/tmpkwce4mjl/c++ -DCMAKE_AR=/tmp/tmpkwce4mjl/ar -DCMAKE_C_COMPILER_AR=/tmp/tmpkwce4mjl/ar -DCMAKE_CXX_COMPILER_AR=/tmp/tmpkwce4mjl/ar --fresh -S. -Bbuild/cpython-311 -DCMAKE_BUILD_TYPE:STRING=Release -Cbuild/cpython-311/CMakeInit.txt -DCMAKE_INSTALL_PREFIX=/tmp/tmpn02_aikz/wheel/platlib -DCMAKE_MAKE_PROGRAM=ninja -DCMAKE_CROSSCOMPILING_EMULATOR=/home/runner/work/_temp/554[8](https://github.com/scikit-hep/awkward/actions/runs/12188354714/job/34002123054#step:9:8)5d84-5b5a-463f-b5b9-a3a3e161e9e8/emsdk-main/node/20.18.0_64bit/bin/node
loading initial cache file build/cpython-311/CMakeInit.txt
CMake Error at CMakeLists.txt:6 (project):
Running
'ninja' '--version'
failed with:
no such file or directory
-- Configuring incomplete, errors occurred!
emcmake: error: 'cmake -DCMAKE_C_COMPILER=/tmp/tmpkwce4mjl/cc -DCMAKE_CXX_COMPILER=/tmp/tmpkwce4mjl/c++ -DCMAKE_AR=/tmp/tmpkwce4mjl/ar -DCMAKE_C_COMPILER_AR=/tmp/tmpkwce4mjl/ar -DCMAKE_CXX_COMPILER_AR=/tmp/tmpkwce4mjl/ar --fresh -S. -Bbuild/cpython-311 -DCMAKE_BUILD_TYPE:STRING=Release -Cbuild/cpython-311/CMakeInit.txt -DCMAKE_INSTALL_PREFIX=/tmp/tmpn02_aikz/wheel/platlib -DCMAKE_MAKE_PROGRAM=ninja -DCMAKE_CROSSCOMPILING_EMULATOR=/home/runner/work/_temp/55485d84-5b5a-463f-b5b[9](https://github.com/scikit-hep/awkward/actions/runs/12188354714/job/34002123054#step:9:9)-a3a3e161e9e8/emsdk-main/node/20.18.0_64bit/bin/node' failed (returned 1)
*** CMake configuration failed
* Creating virtualenv isolated environment...
* Installing packages in isolated environment... (pybind11, scikit-build-core>=0.[10](https://github.com/scikit-hep/awkward/actions/runs/12188354714/job/34002123054#step:9:10))
* Getting dependencies for wheel...
* Installing packages in isolated environment... (ninja>=1.5)
* Building wheel...
ERROR Backend subproccess exited when trying to invoke build_wheel
Error: Process completed with exit code 1.
```
and it was configured somewhere in
https://github.com/scikit-hep/awkward/blob/661702bd9f448333bf48e9768cdc0c98603470c3/.github/workflows/docs.yml#L61-L118
I think it's failing specifically on
https://github.com/scikit-hep/awkward/blob/661702bd9f448333bf48e9768cdc0c98603470c3/.github/workflows/docs.yml#L111
so `pyodide build`. Is this a change in Pyodide? Does it now need Ninja to be installed, and would that be easy to add?
|
closed
|
2024-12-05T21:59:31Z
|
2024-12-16T18:10:01Z
|
https://github.com/scikit-hep/awkward/issues/3329
|
[
"bug",
"tests"
] |
jpivarski
| 2
|
benbusby/whoogle-search
|
flask
| 532
|
[BUG] Tor searches fail without message
|
**Describe the bug**
I have Whoogle running in an Unraid container and the main GUI comes up fine, but whenever I search for something it gets stuck and never finds anything. I saw there was a recent update to the container so I did that but it didn't fix the problem.
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [X] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [X] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [X] Version 0.6.0
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: Windows
- Browser: Brave
- Version: 1.31.91 (based on Chromium: 95.0.4638.69)
**Smartphone (please complete the following information):**
- Device: iPhone XR
- OS: iOS 15.1
- Browser: Brave
- Version: 1.31.91 (based on Chromium: 95.0.4638.69)
**Additional context**
Add any other context about the problem here.
|
closed
|
2021-11-11T23:30:40Z
|
2021-11-13T00:21:11Z
|
https://github.com/benbusby/whoogle-search/issues/532
|
[
"bug"
] |
thunderclap82
| 8
|
OpenBB-finance/OpenBB
|
python
| 6,972
|
[Bug]TooManyRedirects -> Exceeded 30 redirects.
|
**Describe the bug**

/api/v1/equity/darkpool/otc
This endpoint can't retrieve data correctly.
|
open
|
2024-11-28T02:08:07Z
|
2024-11-28T02:52:59Z
|
https://github.com/OpenBB-finance/OpenBB/issues/6972
|
[] |
joshuaBri
| 1
|
ultralytics/ultralytics
|
machine-learning
| 19,071
|
How to limit Ultralytics YOLO11 to a single thread (CPU)
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
How can I limit Ultralytics YOLO11 to a single (CPU) thread during the prediction process?
I have a project using image classification in which I need to limit the use of server resources (threads).
I tried:
results = model(source, device='cpu', workers=1)
or:
export OPENBLAS_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1
export VECLIB_MAXIMUM_THREADS=1
export NUM_THREADS=1
or:
import torch
torch.set_num_threads(1)
torch.set_num_interop_threads(1)
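For reference, a combined sketch of these attempts, on the assumption that the environment variables and `torch.set_num_threads` have to run before torch (and ultralytics) are imported and before any inference starts (the model and image names are placeholders):
```python
import os
# Thread caps generally need to be in place before numpy/torch are imported.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

import torch
torch.set_num_threads(1)
torch.set_num_interop_threads(1)

from ultralytics import YOLO

model = YOLO("yolo11n-cls.pt")  # placeholder classification weights
results = model("image.jpg", device="cpu")
```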
For the test, the application loads a class that does the inference and another that keeps sending the same image and collecting the result:
```python
def run(self):
while True:
json_result = self.app.classify(passage_image_byte)
if len(json_result) == 0:
self.status.inc_image_fail_count()
else:
self.status.inc_image_class_count()
```
See below the CPU usage during the test:

### Additional
_No response_
|
closed
|
2025-02-04T18:05:06Z
|
2025-02-05T17:09:47Z
|
https://github.com/ultralytics/ultralytics/issues/19071
|
[
"question",
"classify"
] |
danielroson
| 3
|
fbdesignpro/sweetviz
|
data-visualization
| 89
|
Add Column descriptions
|
It would be great to be able to provide a list / dict of column descriptions which could be displayed on the HTML report.
|
open
|
2021-05-09T23:24:37Z
|
2021-05-26T17:17:42Z
|
https://github.com/fbdesignpro/sweetviz/issues/89
|
[
"feature request"
] |
steve-evers
| 0
|
OpenInterpreter/open-interpreter
|
python
| 1,385
|
wtf script not working
|
### Describe the bug
I made an error on a command, typed `wtf` and received two errors
1) LLM responded with `Please provide the terminal history or describe the specific issue you're facing so I can assist you effectively.%`
2) A python error, followed by an infinite spinner
```
Traceback (most recent call last):
File "/Users/mike/Library/Python/3.11/bin/wtf", line 8, in <module>
sys.exit(main())
^^^^^^
File "/Users/mike/Library/Python/3.11/lib/python/site-packages/scripts/wtf.py", line 81, in main
history = history[-9000:].strip()
~~~~~~~^^^^^^^^
TypeError: 'NoneType' object is not subscriptable
```
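A defensive sketch of what a fix might look like around line 81 of wtf.py (the helper name below is hypothetical; the point is simply not to subscript None when the shell history can't be read):
```python
history = get_terminal_history()  # hypothetical helper that returned None here
if not history:
    print("Could not read terminal history; please paste the failing command instead.")
    raise SystemExit(1)
history = history[-9000:].strip()
```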
### Reproduce
I use Alacritty with zsh
### Expected behavior
wtf reads the terminal history and fixes my mistakes
### Screenshots
<img width="952" alt="image" src="https://github.com/user-attachments/assets/718452d5-fb1c-49a6-b233-341cd216b2be">
### Open Interpreter version
Open Interpreter 0.3.4 Local III
### Python version
Python 3.11.7
### Operating System name and version
MacOS
### Additional context
_No response_
|
closed
|
2024-08-06T13:29:51Z
|
2024-11-05T18:49:28Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/1385
|
[] |
MikeBirdTech
| 0
|
opengeos/leafmap
|
plotly
| 662
|
.edit_vector's behavior is weird
|
<!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.30.1
- solara: 1.25.1
- Python version: 3.11.3
- Operating System: MacOS. 13.6.3
### Description
- A widget lets the user select a polygon from a list to edit;
- however, after adding one polygon, the next one doesn't appear, but the 3rd one comes back.
- It seems leafmap "eats" one polygon.
### What I Did
create a solara app as below
$ solara run example.py
```
import solara
import solara as sl
from solara.components.file_drop import FileInfo
from solara import Reactive, reactive
import leafmap
import os, tempfile, sys
from io import BytesIO
from typing import Union
import random, numpy as np
from ipywidgets import widgets
import geojson, json
from shapely.geometry import shape
import shapely.wkt
import pandas as pd
import time
BUTTON_KWARGS = dict(color="primary", text=True, outlined=True)
class State:
zoom = reactive(20)
center = reactive((None, None))
enroll_wkt = reactive(None)
def wkt_to_featurecollection(wkt):
geom = shapely.wkt.loads(wkt)
return {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": geom.__geo_interface__,
}
],
}
aoi = 'POLYGON ((-91.16138525535435 37.81442211215915, -91.16138525535435 37.73515728531591, -90.85526326612401 37.73515728531591, -90.85526326612401 37.81442211215915, -91.16138525535435 37.81442211215915))'
wkt_list = ['POLYGON ((-91.15796462083297 37.806056428087615, -91.15796462083297 37.79771581956473, -90.86679686670833 37.79771581956473, -90.86679686670833 37.806056428087615, -91.15796462083297 37.806056428087615))',
'POLYGON ((-91.11222224140039 37.792622288824845, -91.11222224140039 37.76260439211525, -91.02064573377882 37.76260439211525, -91.02064573377882 37.792622288824845, -91.11222224140039 37.792622288824845))',
'POLYGON ((-91.00305251600666 37.79041596911006, -91.0496745431024 37.79041596911006, -91.0496745431024 37.74730356543847, -91.00305251600666 37.74730356543847, -91.00305251600666 37.79041596911006)))']
def widget_droplist(options, desc, width = "270px", padding = "0px 0px 0px 5px", **kwargs):
return widgets.Dropdown(
options=[""] + options,
description=desc,
style={"description_width": "initial"},
layout=widgets.Layout(width=width, padding=padding),
**kwargs)
def add_widgets(m, padding = "0px 0px 0px 5px"):
style = {"description_width": "initial"}
geom_sel = widget_droplist(['1','2','3'], "geometry:")
export_button = widgets.Button(description="Click 'Save' before Export", layout=widgets.Layout(width="200px"))
reset_button = widgets.Button(
description="clear", layout=widgets.Layout(width="50px"), button_style="info"
)
func_box = widgets.HBox([export_button, reset_button])
output = widgets.Output()
# zoom to the footprint
m.add_geojson(
wkt_to_featurecollection(aoi),
layer_name="Footprint",
zoom_to_layer=True,
# hover_style={'opacity':0.9},
style_callback=lambda feat: {"color": "red","opacity":0.9, 'hover_style':{'opacity':0.9}},
)
def select_boundary(change):
m.remove_layer(m.find_layer("Footprint"))
m.draw_control.clear()
m.draw_features = []
# m.user_rois = None
# m.user_roi = None
# time.sleep(0.1)
if change.new == "1":
feature_collection = wkt_to_featurecollection(wkt_list[0])
m.edit_vector(feature_collection)#, layer_name="Footprint")
elif change.new == "2":
feature_collection = wkt_to_featurecollection(wkt_list[1])
m.edit_vector(feature_collection)#, layer_name="Footprint2")
elif change.new == "3":
feature_collection = wkt_to_featurecollection(wkt_list[2])
m.edit_vector(feature_collection)#, layer_name="Footprint2")
else: # "empty"
# m.draw_control.clear()
pass
# output.append_stdout(State.series_df.value.iloc[0]['mask'])
output.append_stdout(change.new)
geom_sel.observe(select_boundary, names="value")
def export_wkt(e):
# -1: latest saved edits
g1 = shape(m.draw_features[-1]['geometry'])
output.outputs = ()
output.append_stdout(g1.wkt)
export_button.on_click(export_wkt)
def reset_output(e):
output.outputs = ()
reset_button.on_click(reset_output)
box = widgets.VBox(
[
geom_sel,
func_box,
output,
]
)
m.add_widget(box, position="topright", add_header=False)
class Map(leafmap.Map):
def __init__(self, **kwargs):
kwargs["toolbar_control"] = False
super().__init__(**kwargs)
basemap = {
"url": "https://mt1.google.com/vt/lyrs=s&x={x}&y={y}&z={z}",
"attribution": "Google",
"name": "Google Satellite",
}
self.add_tile_layer(**basemap, shown=True)
add_widgets(self)
@sl.component
def Page() -> None:
solara.Markdown("""- A widget for the user to select polygon from a list to edit, \n- however, after adding one polygon, the next one doesn't appear but the 3rd go back.\n- seems leafmap "eats" one polygon""")
Map.element( # type: ignore
zoom=State.zoom.value,
scroll_wheel_zoom=True,
toolbar_ctrl=False,
data_ctrl=False,
height="780px",
)
if __name__ == "__main__":
Page()
```
|
closed
|
2024-01-18T14:05:45Z
|
2024-01-23T17:57:57Z
|
https://github.com/opengeos/leafmap/issues/662
|
[
"bug"
] |
suredream
| 9
|
onnx/onnx
|
machine-learning
| 6,074
|
Pause all PR merges
|
PR merge is paused until the CI pipelines are fixed.
cc @gramalingam
|
closed
|
2024-04-10T17:50:12Z
|
2024-04-12T16:15:11Z
|
https://github.com/onnx/onnx/issues/6074
|
[] |
justinchuby
| 1
|
biolab/orange3
|
numpy
| 6,992
|
trubar and setup.py
|
Currently you can't run setup.py without installing trubar first. The import `from trubar import translate` should be moved to a local scope where translate is actually used, or handled in a similar manner to sphinx, which is imported in a try block.
|
closed
|
2025-01-17T10:04:07Z
|
2025-01-19T22:42:36Z
|
https://github.com/biolab/orange3/issues/6992
|
[] |
thocevar
| 1
|
ets-labs/python-dependency-injector
|
flask
| 658
|
Attempting to inject, getting: "AttributeError: 'Provide' object has no attribute X"
|
Trying to understand how to correctly inject dependencies into a class constructor, the following is a sandbox example where I've extracted the key parts from a broader project.
```python
from dependency_injector import containers, providers
from dependency_injector.wiring import Provide, inject
class Config():
def __init__(self, config_file: str = None):
self.config_file = config_file
class Container(containers.DeclarativeContainer):
config = providers.Singleton(Config)
class Engine():
@inject
def __init__(
self,
config: Config = Provide[Container.config],
):
self.config = config
def init(self) -> None:
print(f'Reading from config: {self.config.config_file}')
def container_factory(config_file: str = None) -> Container:
container = containers.DynamicContainer()
config_provider = providers.Factory(Config, config_file=config_file) \
if config_file else providers.Factory(Config)
container.config = config_provider
return container
def main():
container = container_factory('/tmp/config.yml')
container.wire(modules=[__name__])
engine = Engine()
engine.init()
if __name__ == '__main__':
main()
```
I'm getting an error here from the init log:
`AttributeError: 'Provide' object has no attribute 'config_file'`
My expected behavior is that `config` is an instance of the Config class that has been passed the `config_file` value in its constructor. Instead I seem to be given an instance of the Provide class?
I have a few questions about this:
* The documentation is very vague about what wire actually does, and what it expects for its modules/packages arguments. What *should* these values be? The module and/or package that needs to have dependencies injected?
* I'm assuming the problem here is that the wire has failed so it's unable to inject correctly?
* I'm having a hard time finding examples that demonstrate where the `@inject` decorator needs to go for a class -- is it on the class declaration or on the actual `__init__` def, if I wish to inject into the constructor?
If it's relevant, I use `virtualenv .venv` to create a virtual env at the root of the project and am using `poetry install` and `poetry shell` to load the environment and run it.
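For what it's worth, a sketch of the pattern I would expect to work (reusing the definitions from the snippet above, and assuming this is the intended behavior): wire an instance of the same `DeclarativeContainer` that `Provide[Container.config]` refers to, and override its provider, rather than building a separate `DynamicContainer` whose providers the markers don't know about:
```python
def main():
    container = Container()
    # Override the declarative provider with a file-specific factory.
    container.config.override(providers.Factory(Config, config_file='/tmp/config.yml'))
    container.wire(modules=[__name__])

    engine = Engine()
    engine.init()
```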
|
open
|
2023-01-02T03:24:32Z
|
2023-05-25T12:48:49Z
|
https://github.com/ets-labs/python-dependency-injector/issues/658
|
[] |
eriknelson
| 1
|
scikit-learn-contrib/metric-learn
|
scikit-learn
| 1
|
python3 setup.py test fails
|
Installation works fine using homebrew python3. But when I run the tests I get the errors below. Any idea?
---
running test
running egg_info
writing top-level names to metric_learn.egg-info/top_level.txt
writing requirements to metric_learn.egg-info/requires.txt
writing metric_learn.egg-info/PKG-INFO
writing dependency_links to metric_learn.egg-info/dependency_links.txt
reading manifest file 'metric_learn.egg-info/SOURCES.txt'
writing manifest file 'metric_learn.egg-info/SOURCES.txt'
running build_ext
Traceback (most recent call last):
File "setup.py", line 23, in <module>
test_suite='test'
File "/usr/local/Cellar/python3/3.4.2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/local/Cellar/python3/3.4.2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/local/Cellar/python3/3.4.2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.4/site-packages/setuptools/command/test.py", line 138, in run
self.with_project_on_sys_path(self.run_tests)
File "/usr/local/lib/python3.4/site-packages/setuptools/command/test.py", line 118, in with_project_on_sys_path
func()
File "/usr/local/lib/python3.4/site-packages/setuptools/command/test.py", line 164, in run_tests
testLoader = cks
File "/usr/local/Cellar/python3/3.4.2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/main.py", line 92, in **init**
self.parseArgs(argv)
File "/usr/local/Cellar/python3/3.4.2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/main.py", line 139, in parseArgs
self.createTests()
File "/usr/local/Cellar/python3/3.4.2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/main.py", line 146, in createTests
self.module)
File "/usr/local/Cellar/python3/3.4.2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/loader.py", line 146, in loadTestsFromNames
suites = [self.loadTestsFromName(name, module) for name in names]
File "/usr/local/Cellar/python3/3.4.2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/loader.py", line 146, in <listcomp>
suites = [self.loadTestsFromName(name, module) for name in names]
File "/usr/local/Cellar/python3/3.4.2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/loader.py", line 117, in loadTestsFromName
return self.loadTestsFromModule(obj)
File "/usr/local/lib/python3.4/site-packages/setuptools/command/test.py", line 35, in loadTestsFromModule
tests.append(self.loadTestsFromName(submodule))
File "/usr/local/Cellar/python3/3.4.2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/loader.py", line 114, in loadTestsFromName
parent, obj = obj, getattr(obj, part)
AttributeError: 'module' object has no attribute 'metric_learn_test'
|
closed
|
2014-10-14T15:28:03Z
|
2014-10-14T18:02:07Z
|
https://github.com/scikit-learn-contrib/metric-learn/issues/1
|
[] |
MartinHjelm
| 4
|
QuivrHQ/quivr
|
api
| 3,256
|
Process web crawled url
|
* process file should process urls and audio files
|
closed
|
2024-09-25T09:29:32Z
|
2024-09-25T18:56:16Z
|
https://github.com/QuivrHQ/quivr/issues/3256
|
[
"area: backend"
] |
linear[bot]
| 1
|
unit8co/darts
|
data-science
| 2,147
|
[Question] Future covariates series length requirements when ocl > 1
|
**Describe the bug**
I have a problem with the following scenario (this is my real scenario):
I have a monthly training dataset up to 31-05-2018.
I want to forecast 7 months with `output_chunk_length`=7.
So, my future index is ['2018-06-30', '2018-07-31', '2018-08-31', '2018-09-30', '2018-10-31', '2018-11-30', '2018-12-31'].
```{python}
forecaster = RegressionModel(lags=12,
lags_past_covariates=None,
lags_future_covariates=[12],
output_chunk_length=7,
add_encoders=None,
model=KNeighborsRegressor(p=1),
multi_models=False,
use_static_covariates=False)
```
When I call `forecaster.predict(n=7)` after `fit` I receive the following error:
```{python}
ValueError: The corresponding future_covariate of the series at index 0 isn't sufficiently long. Given horizon `n=7`, `min(lags_future_covariates)=12`, `max(lags_future_covariates)=12` and `output_chunk_length=7`, the future_covariate has to range from 2018-12-31 00:00:00 until 2019-06-30 00:00:00 (inclusive), but it ranges only from 2013-06-30 00:00:00 until 2018-12-31 00:00:00.
```
**To Reproduce**
```{python}
from darts.datasets import WeatherDataset
from darts.models import RegressionModel
from sklearn.linear_model import Ridge
series = WeatherDataset().load()
# predicting atmospheric pressure
target = series['p (mbar)'][:100]
# optionally, use past observed rainfall (pretending to be unknown beyond index 100)
past_cov = series['rain (mm)'][:100]
# optionally, use future temperatures (pretending this component is a forecast)
future_cov = series['T (degC)'][:107]
# wrap around the sklearn Ridge model
model = RegressionModel(
model=Ridge(),
lags=12,
lags_past_covariates=4,
lags_future_covariates=[7], #if the lags_future_covariates is 7 or greater, the code fails. 7 is the output_chunk_length
output_chunk_length=7
)
model.fit(target, past_covariates=past_cov, future_covariates=future_cov)
pred = model.predict(7)
pred.values()
```
If n <= output_chunk_length, I would expect to need to provide, for `future_covariates`, at least the same time span as the target plus the maximum of the next `n` steps and `lags_future_covariates` + 1.
**Expected behavior**
When using future_covariates, I would expect to only need to provide the next `n=7` index entries of the future dataset. Right now, it is asking for an extra 13 months after 31-05-2018.
I followed the documentation:

**System (please complete the following information):**
- Python version: 3.9.12
- darts version 0.27.1
|
closed
|
2024-01-08T21:08:01Z
|
2024-04-17T07:00:15Z
|
https://github.com/unit8co/darts/issues/2147
|
[
"question"
] |
guilhermeparreira
| 2
|
python-gino/gino
|
asyncio
| 575
|
How do I Batch Update
|
* GINO version: 0.8.3
* Python version: 3.6.6
* asyncpg version: 0.19.0
* PostgreSQL version: 11.5
I have the following model:
```
class ProviderHotel(db.Model):
__bind_key__ = 'hotels'
__tablename__ = 'provider_hotels'
id = db.Column(db.Integer, primary_key=True)
provider = db.Column(db.String)
travolutionary_id = db.Column(db.String)
hotel_id = db.Column(db.Integer)
provider_hotel_id = db.Column(db.String)
...
```
I have the following python object:
```
obj = {
'1': {
'provider_hotel_id': 1,
...
},
'2': {
'provider_hotel_id': 2,
...
},
'3': {
'provider_hotel_id': 3,
...
},
}
```
I would like to filter all the entries in the db whose `provider_hotel_id` is in `obj.keys()`, then update each of them by unpacking its respective value from `obj`. I know this can be accomplished by first querying the db, then updating each entry using the `.apply()` command, but I am wondering if there is any way I can do a batch update, as it is expensive to commit for each updated row (for a large number of rows).
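A sketch of a middle ground (assuming the model above, and that `obj` maps `provider_hotel_id` values to the fields to set): issue one UPDATE per key, but inside a single transaction, so the whole batch commits once:
```python
async def batch_update(obj):
    async with db.transaction():
        for provider_hotel_id, values in obj.items():
            await ProviderHotel.update.values(**values).where(
                ProviderHotel.provider_hotel_id == provider_hotel_id
            ).gino.status()
```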
|
closed
|
2019-10-18T02:24:44Z
|
2019-10-20T05:12:54Z
|
https://github.com/python-gino/gino/issues/575
|
[
"question"
] |
nimwijetunga
| 10
|
assafelovic/gpt-researcher
|
automation
| 701
|
llama3 keyerror 'server'
|
I faced this error while running gpt-researcher.
* I am using the latest version (from GitHub)
my `.env` file
```bash
EMBEDDING_PROVIDER=ollama
# Specify one of the embedding models supported by Ollama
OLLAMA_EMBEDDING_MODEL=nomic-embed-text
# Use ollama for both, LLM and EMBEDDING provider
LLM_PROVIDER=ollama
# Ollama endpoint to use
OLLAMA_BASE_URL=http://localhost:11434
# Specify one of the LLM models supported by Ollama
FAST_LLM_MODEL=llama3:70b
# Specify one of the LLM models supported by Ollama
SMART_LLM_MODEL=llama3:70b
# The temperature to use, defaults to 0.55
TEMPERATURE=0.55
# Retriver
RETRIEVER=tavily
```
The error itself
```
$ uvicorn main:app
INFO: Started server process [1211335]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: 156.218.212.11:0 - "GET / HTTP/1.1" 200 OK
INFO: 156.218.212.11:0 - "GET /static/gptr-logo.png HTTP/1.1" 304 Not Modified
INFO: 156.218.212.11:0 - "GET /site/scripts.js HTTP/1.1" 304 Not Modified
INFO: 156.218.212.11:0 - "GET / HTTP/1.1" 200 OK
INFO: ('156.218.212.11', 0) - "WebSocket /ws" [accepted]
INFO: connection open
🔎 Starting the research task for 'Automatic Speech Recognition SOTA Models'...
⚠️ Error in reading JSON, attempting to repair JSON
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/wakeb/text-workspace/gpt-researcher/gpt_researcher/master/actions.py", line 107, in choose_agent
agent_dict = json.loads(response)
^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 244, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send) # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/middleware/errors.py", line 151, in __call__
await self.app(scope, receive, send)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/middleware/cors.py", line 77, in __call__
await self.app(scope, receive, send)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/routing.py", line 373, in handle
await self.app(scope, receive, send)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/routing.py", line 96, in app
await wrap_app_handling_exceptions(app, session)(scope, receive, send)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/starlette/routing.py", line 94, in app
await func(session)
File "/home/wakeb/miniconda3/envs/gpt-research/lib/python3.11/site-packages/fastapi/routing.py", line 348, in app
await dependant.call(**values)
File "/home/wakeb/text-workspace/gpt-researcher/backend/server.py", line 88, in websocket_endpoint
report = await manager.start_streaming(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/text-workspace/gpt-researcher/backend/websocket_manager.py", line 60, in start_streaming
report = await run_agent(task, report_type, report_source, tone, websocket, headers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/text-workspace/gpt-researcher/backend/websocket_manager.py", line 85, in run_agent
report = await researcher.run()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/text-workspace/gpt-researcher/backend/report_type/detailed_report/detailed_report.py", line 59, in run
await self._initial_research()
File "/home/wakeb/text-workspace/gpt-researcher/backend/report_type/detailed_report/detailed_report.py", line 80, in _initial_research
await self.main_task_assistant.conduct_research()
File "/home/wakeb/text-workspace/gpt-researcher/gpt_researcher/master/agent.py", line 109, in conduct_research
self.agent, self.role = await choose_agent(
^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/text-workspace/gpt-researcher/gpt_researcher/master/actions.py", line 112, in choose_agent
return await handle_json_error(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wakeb/text-workspace/gpt-researcher/gpt_researcher/master/actions.py", line 127, in handle_json_error
return json_data["server"], json_data["agent_role_prompt"]
~~~~~~~~~^^^^^^^^^^
KeyError: 'server'
INFO: connection closed
```
I think the error occurs because gpt-researcher assumes the response will be JSON, but it got plain text, so the problem might be in the prompt format for llama3. Suffixing the prompt with an instruction to return the answer in JSON might solve the problem [see](https://github.com/ollama/ollama/blob/main/docs/api.md#request-2).
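A sketch of that idea against the Ollama HTTP API (the endpoint and model follow the `.env` above, and the prompt wording is made up), using the `format: "json"` option from the linked docs so that `json.loads` gets a parseable string:
```python
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3:70b",
        "prompt": (
            "Choose a research agent for the topic 'Automatic Speech Recognition "
            "SOTA Models'. Respond only in JSON with the keys 'server' and "
            "'agent_role_prompt'."
        ),
        "format": "json",
        "stream": False,
    },
    timeout=120,
)
print(json.loads(resp.json()["response"]))
```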
|
closed
|
2024-07-24T11:58:38Z
|
2024-08-13T06:17:14Z
|
https://github.com/assafelovic/gpt-researcher/issues/701
|
[] |
Abdullahaml1
| 2
|
google-research/bert
|
tensorflow
| 1,241
|
/bert_config.json; No such file or directory
|
I tried to use BERT on Google Colab. I set up the model and GLUE data folders as follows:
```
!export BERT_BASE_DIR=/content/uncased_L-12_H-768_A-12
!export GLUE_DIR=/content/glue_data
!python run_classifier.py \
--task_name=MRPC \
--do_train=true \
--do_eval=true \
--data_dir=$GLUE_DIR/MRPC \
--vocab_file=$BERT_BASE_DIR/vocab.txt \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
--max_seq_length=128 \
--train_batch_size=32 \
--learning_rate=2e-5 \
--num_train_epochs=3.0 \
--output_dir=/tmp/mrpc_output/
```
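A side note on a likely cause (hedged): in Colab, each `!` line runs in its own shell, so the two `!export` lines do not carry over to the later `!python` call, and `$BERT_BASE_DIR` expands to nothing, which would produce exactly a path like `/bert_config.json`. Setting the variables from Python first is one way around that:
```python
import os

# These persist for the rest of the notebook and are inherited by later ! commands.
os.environ["BERT_BASE_DIR"] = "/content/uncased_L-12_H-768_A-12"
os.environ["GLUE_DIR"] = "/content/glue_data"
```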
Running the command above, however, gives this error:
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From /content/optimization.py:87: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
WARNING:tensorflow:From run_classifier.py:981: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.
WARNING:tensorflow:From run_classifier.py:784: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.
W0629 15:14:32.336232 140241404368768 deprecation_wrapper.py:119] From run_classifier.py:784: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.
WARNING:tensorflow:From run_classifier.py:784: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.
W0629 15:14:32.336425 140241404368768 deprecation_wrapper.py:119] From run_classifier.py:784: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.
WARNING:tensorflow:From /content/modeling.py:93: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.
W0629 15:14:32.336830 140241404368768 deprecation_wrapper.py:119] From /content/modeling.py:93: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.
Traceback (most recent call last):
File "run_classifier.py", line 981, in <module>
tf.app.run()
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 303, in run
_run_main(main, args)
File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "run_classifier.py", line 800, in main
bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
File "/content/modeling.py", line 94, in from_json_file
text = reader.read()
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/lib/io/file_io.py", line 122, in read
self._preread_check()
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/lib/io/file_io.py", line 84, in _preread_check
compat.as_bytes(self.__name), 1024 * 512)
tensorflow.python.framework.errors_impl.NotFoundError: /bert_config.json; No such file or directory
```
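The path in the error (`/bert_config.json`, i.e. at the filesystem root) suggests `$BERT_BASE_DIR` expanded to an empty string: in Colab each `!` line runs in its own shell, so the `!export` lines do not carry over to the later `!python` call. A minimal sketch of one workaround, assuming the same paths as in the command above:

```python
import os

# Set the variables in the notebook's Python process; the shells spawned for
# subsequent !commands inherit this environment.
os.environ["BERT_BASE_DIR"] = "/content/uncased_L-12_H-768_A-12"
os.environ["GLUE_DIR"] = "/content/glue_data"
```

followed by the same `!python run_classifier.py ...` invocation (or, equivalently, with the full paths written out in place of `$BERT_BASE_DIR` and `$GLUE_DIR`).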
|
open
|
2021-06-29T15:27:53Z
|
2021-06-29T15:28:29Z
|
https://github.com/google-research/bert/issues/1241
|
[] |
aoyuzhang
| 0
|
deezer/spleeter
|
deep-learning
| 715
|
[Discussion] Can't seem to get the GPU working for training; maybe I'm too early in the pipeline to see it?
|
I've been trying for the past week to get GPU acceleration working and I don't think I'm getting it. I've tried a dedicated VM, the container, and versions 2.3.0 & 2.2.2. I've lost count of all the different CUDA/TensorFlow configurations I've tried. They all work when I test TensorFlow directly for GPU access (I test PyTorch as well) and it seems to have access. I thought I got it to work once: I saw GPU memory go up to 20GB (half of the 40GB available), but after 6 hours the most I ever saw was the GPU spiking up to 15% for 1 sec. The next time I tried the same job on a different config, the memory never went above 300MB and GPU usage never went above 0%. When I run nvidia-smi I see the python job listed.
Now I'm not sure if this is because I haven't figured out how to get the GPU set up properly, or if there is a very long and costly loading phase (I have about 450k wavs). I see the wavs being loaded in the stdout logging, and they are getting cached to disk. But should the GPU be in use during this phase, or do all the files need to be converted before training really starts?
Can someone please provide a brief explanation of what to expect when the training is working properly? I can't tell whether I'm doing something wrong or not.
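One way to check whether the training ops (rather than just the process) are actually being placed on the GPU is TensorFlow's device-placement logging. Below is a minimal sketch, assuming the TF 2.x environment that Spleeter 2.x installs; the calls are standard TensorFlow, not Spleeter-specific:

```python
import tensorflow as tf

# Print which device each op is assigned to; call this before any model is built.
tf.debugging.set_log_device_placement(True)

# Confirm TensorFlow can see the GPU at all.
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))

# Small matmul as a smoke test: the placement log should mention .../device:GPU:0.
a = tf.random.uniform((1024, 1024))
b = tf.random.uniform((1024, 1024))
print(tf.matmul(a, b).device)
```

Note that if the run is still in the audio-loading/caching phase described above, low GPU utilisation during that phase would not by itself indicate a broken setup.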
|
open
|
2022-01-24T14:45:14Z
|
2022-01-30T20:36:45Z
|
https://github.com/deezer/spleeter/issues/715
|
[
"question"
] |
dustyny
| 3
|
apify/crawlee-python
|
web-scraping
| 479
|
Create a new guide for result storages (`Dataset`, `KeyValueStore`)
|
- We should create a new documentation guide on how to work with result storages (`Dataset`, `KeyValueStore`); a minimal usage sketch follows this list.
- Inspiration: https://crawlee.dev/docs/guides/result-storage
- Check the structure of other guides - [docs/guides](https://github.com/apify/crawlee-python/tree/master/docs/guides), and try to render it using `make run-doc`.
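For reference while drafting the guide, here is a minimal sketch of how the two result storages are used (import paths and method names follow the current `crawlee` package; treat them as assumptions to verify against the codebase):

```python
import asyncio

from crawlee.storages import Dataset, KeyValueStore


async def main() -> None:
    # Dataset: append-only storage for structured scraping results.
    dataset = await Dataset.open()
    await dataset.push_data({"url": "https://example.com", "title": "Example Domain"})

    # KeyValueStore: arbitrary keyed records (state, screenshots, exports, ...).
    kvs = await KeyValueStore.open()
    await kvs.set_value("last-run", {"items_scraped": 1})
    print(await kvs.get_value("last-run"))


asyncio.run(main())
```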
|
closed
|
2024-08-30T12:02:34Z
|
2024-11-05T14:45:51Z
|
https://github.com/apify/crawlee-python/issues/479
|
[
"documentation",
"t-tooling",
"hacktoberfest"
] |
vdusek
| 9
|