| repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (list, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
piskvorky/gensim
|
data-science
| 3,541
|
Installation with Poetry fails on Python 3.12 - `error: ‘PyArray_Descr’ {aka ‘struct _PyArray_Descr’} has no member named ‘subarray’`
|
#### Problem description
Installation of Gensim 4.3.2 or the current develop branch fails on Python 3.12 when using Poetry; with pip there is no problem.
I first noticed this in a [failed run of our CI/CD pipeline](https://github.com/NatLibFi/Annif/actions/runs/9740136805/job/26882977015), which installs v4.3.2. Two weeks ago it [ran successfully](https://github.com/NatLibFi/Annif/actions/runs/9562429822).
#### Steps to reproduce
##### Success with pip
Run
`docker run python:3.12 pip install gensim` (for v4.3.2)
or
`docker run python:3.12 pip install git+https://github.com/piskvorky/gensim.git@develop`
The dependencies installed by pip for the develop branch are:
```
gensim 4.3.2.dev0
numpy 1.26.4
scipy 1.14.0
smart-open 7.0.4
wrapt 1.16.0
```
##### Fail with Poetry
Using this `Dockerfile`
```Dockerfile
FROM python:3.12
RUN pip install --upgrade pip poetry --no-cache-dir
COPY pyproject.toml pyproject.toml
RUN poetry install
```
and this `pyproject.toml`
```toml
[tool.poetry]
name = "gensim-testing"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = "^3.12"
# gensim = "4.3.2"
gensim = { git = "https://github.com/piskvorky/gensim.git", branch = "develop" }
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
to build and install the current develop branch (or v4.3.2) with the command `docker build -t gensim-build:3.12 .`, the result is a failure at the following build step:
```
running build_ext
building 'gensim.models.word2vec_inner' extension
creating build/temp.linux-x86_64-cpython-312
creating build/temp.linux-x86_64-cpython-312/gensim
creating build/temp.linux-x86_64-cpython-312/gensim/models
gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -fPIC -I/tmp/tmp2ohmylfq/.venv/include -I/usr/local/include/python3.12 -I/tmp/tmp2ohmylfq/.venv/lib/python3.12/site-packages/numpy/_core/include -c gensim/models/word2vec_inner.c -o build/temp.linux-x86_64-cpython-312/gensim/models/word2vec_inner.o
In file included from /tmp/tmp2ohmylfq/.venv/lib/python3.12/site-packages/numpy/_core/include/numpy/ndarraytypes.h:1909,
from /tmp/tmp2ohmylfq/.venv/lib/python3.12/site-packages/numpy/_core/include/numpy/ndarrayobject.h:12,
from /tmp/tmp2ohmylfq/.venv/lib/python3.12/site-packages/numpy/_core/include/numpy/arrayobject.h:5,
from gensim/models/word2vec_inner.c:771:
/tmp/tmp2ohmylfq/.venv/lib/python3.12/site-packages/numpy/_core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
17 | #warning "Using deprecated NumPy API, disable it with " \
| ^~~~~~~
In file included from /usr/local/include/python3.12/Python.h:38,
from gensim/models/word2vec_inner.c:25:
gensim/models/word2vec_inner.c: In function ‘__pyx_f_5numpy_PyDataType_SHAPE’:
gensim/models/word2vec_inner.c:9365:39: error: ‘PyArray_Descr’ {aka ‘struct _PyArray_Descr’} has no member named ‘subarray’
9365 | __Pyx_INCREF(((PyObject*)__pyx_v_d->subarray->shape));
| ^~
/usr/local/include/python3.12/pyport.h:24:38: note: in definition of macro ‘_Py_CAST’
24 | #define _Py_CAST(type, expr) ((type)(expr))
| ^~~~
/usr/local/include/python3.12/object.h:661:35: note: in expansion of macro ‘_PyObject_CAST’
661 | # define Py_INCREF(op) Py_INCREF(_PyObject_CAST(op))
| ^~~~~~~~~~~~~~
gensim/models/word2vec_inner.c:1435:27: note: in expansion of macro ‘Py_INCREF’
1435 | #define __Pyx_INCREF(r) Py_INCREF(r)
| ^~~~~~~~~
gensim/models/word2vec_inner.c:9365:5: note: in expansion of macro ‘__Pyx_INCREF’
9365 | __Pyx_INCREF(((PyObject*)__pyx_v_d->subarray->shape));
| ^~~~~~~~~~~~
gensim/models/word2vec_inner.c:9366:36: error: ‘PyArray_Descr’ {aka ‘struct _PyArray_Descr’} has no member named ‘subarray’
9366 | __pyx_r = ((PyObject*)__pyx_v_d->subarray->shape);
| ^~
error: command '/usr/bin/gcc' failed with exit code 1
```
(I thought this had something to do with the NumPy 2.0 release, but the current develop branch already pins NumPy to <2.0, so it seems something other than NumPy is causing this.)
The dependencies resolved by Poetry are the same as with pip.
With Python 3.11 the installation succeeds with Poetry, and the dependencies are the same.
#### Versions
Python 3.12.4
Poetry 1.8.3
|
closed
|
2024-07-02T19:18:03Z
|
2024-07-18T12:10:32Z
|
https://github.com/piskvorky/gensim/issues/3541
|
[] |
juhoinkinen
| 5
|
piskvorky/gensim
|
machine-learning
| 3,539
|
gensim on Windows arm64 system probably downloads unsupported NumPy version during building from source
|
#### Problem description
Looking at the meta package [oldest-supported-numpy](https://github.com/scipy/oldest-supported-numpy/blob/main/setup.cfg), I see that numpy<1.18.5 will be installed for Python 3.8 on Windows arm64 (unlike on all non-arm64 and non-aarch64 platforms).
I therefore believe that the check specified in the _pyproject.toml_ file will install an invalid (and unsupported by gensim) version of the NumPy library for Windows on arm64 users (see the sketch below the snippet):
```python
# oldest supported Numpy for this platform is 1.17 but the oldest supported by Gensim
# is 1.18.5, remove the line when they increase oldest supported Numpy for this platform
"numpy==1.18.5; python_version=='3.8' and platform_machine not in 'arm64|aarch64'",
```
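To make the marker's effect concrete, here is a minimal sketch (not from the original report) that evaluates the quoted environment marker with the `packaging` library; the `platform_machine` values are illustrative assumptions, not measurements from a real Windows-on-arm64 machine:
```python
# Hypothetical check of the PEP 508 marker quoted above; platform values are assumed.
from packaging.markers import Marker

marker = Marker("python_version == '3.8' and platform_machine not in 'arm64|aarch64'")

# On an arm64 machine the marker is False, so the numpy==1.18.5 pin is skipped
# and the (older) oldest-supported-numpy pin applies instead.
print(marker.evaluate({"python_version": "3.8", "platform_machine": "arm64"}))   # False
# On x86_64 the marker is True and the 1.18.5 pin applies.
print(marker.evaluate({"python_version": "3.8", "platform_machine": "x86_64"}))  # True
```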
#### Steps/code/corpus to reproduce
As the issue is a side-effect of a discussion from a different PR (https://github.com/piskvorky/gensim/pull/3538#discussion_r1641987802) I do not have a minimal reproducible example.
I do not have direct access to Windows on arm64 so I'll try virtualising the architecture and building _gensim_ there. Once I'm done I'll upload the output.
#### Versions
Python 3.8
Windows 10
arm64 architecture
<!--
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import struct; print("Bits", 8 * struct.calcsize("P"))
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
-->
|
open
|
2024-06-27T11:55:40Z
|
2025-02-07T13:42:55Z
|
https://github.com/piskvorky/gensim/issues/3539
|
[] |
filip-komarzyniec
| 1
|
localstack/localstack
|
python
| 12,263
|
feature request: Support for Snowflake REST API
|
### Is there an existing issue for this?
- [x] I have searched the existing issues
### Feature description
As part of managing all of our infrastructure as code, we have developed a framework for managing our Snowflake resources. It uses the Python APIs, which under the covers use the REST API.
This was one of the main use cases I was hoping to use LocalStack's Snowflake service for, so that we didn't have to stand up and tear down infrastructure inside a real Snowflake account during development.
Based on this [reply](https://localstack-community.slack.com/archives/CMAFN2KSP/p1736246889452909?thread_ts=1735842618.178209&cid=CMAFN2KSP) to my original query in Slack, it sounds like this is not currently supported but might be added to the roadmap.
As it stands, this severely curtails the usefulness of localstack for our team.
### 🧑💻 Implementation
_No response_
### Anything else?
_No response_
|
open
|
2025-02-13T14:48:13Z
|
2025-02-17T14:46:55Z
|
https://github.com/localstack/localstack/issues/12263
|
[
"type: feature",
"status: backlog"
] |
noah-goodrich
| 4
|
vitalik/django-ninja
|
rest-api
| 599
|
__modify_schema__() missing 1 required positional argument: 'field'
|
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/views.py", line 35, in openapi_json
schema = api.get_openapi_schema()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/main.py", line 419, in get_openapi_schema
return get_schema(api=self, path_prefix=path_prefix)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 40, in get_schema
openapi = OpenAPISchema(api, path_prefix)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 62, in __init__
("paths", self.get_paths()),
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 77, in get_paths
path_methods = self.methods(path_view.operations)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 86, in methods
operation_details = self.operation_details(op)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 114, in operation_details
body = self.request_body(operation)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 228, in request_body
model, remove_level=model._param_source == "body"
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 186, in _create_schema_from_model
cast(Type[BaseModel], model), ref_prefix=REF_PREFIX, by_alias=by_alias
File "pydantic/schema.py", line 167, in pydantic.schema.model_schema
File "pydantic/schema.py", line 548, in pydantic.schema.model_process_schema
File "pydantic/schema.py", line 589, in pydantic.schema.model_type_schema
File "pydantic/schema.py", line 236, in pydantic.schema.field_schema
File "pydantic/schema.py", line 303, in pydantic.schema.get_field_schema_validations
TypeError: __modify_schema__() missing 1 required positional argument: 'field'
[28/Oct/2022 10:11:46] "GET /api/openapi.json HTTP/1.1" 500 254519
[28/Oct/2022 10:11:46] "GET /api/docs HTTP/1.1" 200 788
Internal Server Error: /api/openapi.json
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/views.py", line 35, in openapi_json
schema = api.get_openapi_schema()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/main.py", line 419, in get_openapi_schema
return get_schema(api=self, path_prefix=path_prefix)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 40, in get_schema
openapi = OpenAPISchema(api, path_prefix)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 62, in __init__
("paths", self.get_paths()),
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 77, in get_paths
path_methods = self.methods(path_view.operations)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 86, in methods
operation_details = self.operation_details(op)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 114, in operation_details
body = self.request_body(operation)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 228, in request_body
model, remove_level=model._param_source == "body"
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 186, in _create_schema_from_model
cast(Type[BaseModel], model), ref_prefix=REF_PREFIX, by_alias=by_alias
File "pydantic/schema.py", line 167, in pydantic.schema.model_schema
File "pydantic/schema.py", line 548, in pydantic.schema.model_process_schema
File "pydantic/schema.py", line 589, in pydantic.schema.model_type_schema
File "pydantic/schema.py", line 236, in pydantic.schema.field_schema
File "pydantic/schema.py", line 303, in pydantic.schema.get_field_schema_validations
TypeError: __modify_schema__() missing 1 required positional argument: 'field'
[28/Oct/2022 10:11:47] "GET /api/openapi.json HTTP/1.1" 500 254519
[28/Oct/2022 10:24:17] "GET /api/docs HTTP/1.1" 200 788
Internal Server Error: /api/openapi.json
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/views.py", line 35, in openapi_json
schema = api.get_openapi_schema()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/main.py", line 419, in get_openapi_schema
return get_schema(api=self, path_prefix=path_prefix)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 40, in get_schema
openapi = OpenAPISchema(api, path_prefix)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 62, in __init__
("paths", self.get_paths()),
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 77, in get_paths
path_methods = self.methods(path_view.operations)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 86, in methods
operation_details = self.operation_details(op)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 114, in operation_details
body = self.request_body(operation)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 228, in request_body
model, remove_level=model._param_source == "body"
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ninja/openapi/schema.py", line 186, in _create_schema_from_model
cast(Type[BaseModel], model), ref_prefix=REF_PREFIX, by_alias=by_alias
File "pydantic/schema.py", line 167, in pydantic.schema.model_schema
File "pydantic/schema.py", line 548, in pydantic.schema.model_process_schema
File "pydantic/schema.py", line 589, in pydantic.schema.model_type_schema
File "pydantic/schema.py", line 236, in pydantic.schema.field_schema
File "pydantic/schema.py", line 303, in pydantic.schema.get_field_schema_validations
TypeError: __modify_schema__() missing 1 required positional argument: 'field'
[28/Oct/2022 10:24:17] "GET /api/openapi.json HTTP/1.1" 500 254519
|
closed
|
2022-10-28T02:42:06Z
|
2022-12-13T10:56:28Z
|
https://github.com/vitalik/django-ninja/issues/599
|
[] |
SaluteGF
| 10
|
Morizeyao/GPT2-Chinese
|
nlp
| 37
|
Where can I download the trained model?
|
closed
|
2019-08-26T11:30:56Z
|
2019-08-27T13:31:37Z
|
https://github.com/Morizeyao/GPT2-Chinese/issues/37
|
[] |
molsheim
| 1
|
|
sqlalchemy/alembic
|
sqlalchemy
| 1,117
|
Columns and tables comments with MSSQL
|
**Describe the bug**
When using `operations.alter_column` to add a comment to a column of an MSSQL table, it raises an error. Is it possible with alembic to generate column and table descriptions with MSSQL using Operations or BatchOperations?
**Expected behavior**
It's expected to update the table in the database and the column's description.
**To Reproduce**
```py
from alembic.migration import MigrationContext
from alembic.operations import Operations

# `engine` is an existing SQLAlchemy Engine bound to the MSSQL database
conn = engine.connect()
ctx = MigrationContext.configure(conn)
op = Operations(ctx)
op.alter_column("table_name", "column_name", schema="schema_name", comment="Test Comment")
```
**Error**
```
AttributeError Traceback (most recent call last)
/usr/local/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py in _compiler_dispatch(self, visitor, **kw)
76 try:
---> 77 meth = getter(visitor)
78 except AttributeError as err:
AttributeError: 'MSDDLCompiler' object has no attribute 'visit_clause'
The above exception was the direct cause of the following exception:
UnsupportedCompilationError Traceback (most recent call last)
/usr/local/lib/python3.7/site-packages/sqlalchemy/ext/compiler.py in _wrap_existing_dispatch(element, compiler, **kw)
529 try:
--> 530 return existing_dispatch(element, compiler, **kw)
531 except exc.UnsupportedCompilationError as uce:
/usr/local/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py in _compiler_dispatch(self, visitor, **kw)
78 except AttributeError as err:
---> 79 return visitor.visit_unsupported_compilation(self, err, **kw)
80
/usr/local/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py in visit_unsupported_compilation(self, element, err)
483 exc.UnsupportedCompilationError(self, type(element)),
--> 484 replace_context=err,
485 )
/usr/local/lib/python3.7/site-packages/sqlalchemy/util/compat.py in raise_(***failed resolving arguments***)
207 try:
--> 208 raise exception
209 finally:
UnsupportedCompilationError: Compiler <sqlalchemy.dialects.mssql.base.MSDDLCompiler object at 0x7f13e0ec57d0> can't render element of type <class 'alembic.ddl.base.ColumnComment'> (Background on this error at: https://sqlalche.me/e/14/l7de)
```
**Versions.**
- OS: Linux
- Python: 3.9.5
- Alembic: 1.7.6
- SQLAlchemy: 1.4.43
- Database: Microsoft SQL Server 2016
**Additional context**
Is it possible with alembic to generate columns and table descriptions with MSSQL using operations or Batchoperations?
**Have a nice day!**
|
closed
|
2022-11-11T17:52:31Z
|
2022-11-11T19:10:24Z
|
https://github.com/sqlalchemy/alembic/issues/1117
|
[
"use case"
] |
edulauer
| 1
|
huggingface/datasets
|
pandas
| 7,142
|
Specifying datatype when adding a column to a dataset.
|
### Feature request
There should be a way to specify the datatype of a column in `datasets.add_column()`.
### Motivation
To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()`, which is slow for large datasets. Another workaround is to pass a `numpy.array()` of the desired type to the `datasets.add_column()` function (see the sketch below).
IMO this functionality should be natively supported.
https://discuss.huggingface.co/t/add-column-with-a-particular-type-in-datasets/95674
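For concreteness, here is a minimal sketch of the two-step workaround described above; the dataset contents and column names are illustrative assumptions:
```python
from datasets import Dataset, Value

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# Step 1: add_column infers a type for the new column (int64 here) ...
ds = ds.add_column("label", [0, 1, 0])
# Step 2: ... then cast_column rewrites it to the desired type, which is the
# slow part for large datasets.
ds = ds.cast_column("label", Value("int8"))
print(ds.features)
```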
### Your contribution
I can submit a PR for this.
|
closed
|
2024-09-08T07:34:24Z
|
2024-09-17T03:46:32Z
|
https://github.com/huggingface/datasets/issues/7142
|
[
"enhancement"
] |
varadhbhatnagar
| 1
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,830
|
nodriver detect browser closed
|
Any way to detect when the browser is manually closed by a user?
I have a
```
while True:
    time.sleep(5)
    if browser.stopped:
        return
```
but `browser.stopped` is always `False`, even after the browser is closed. I've looked all through the code and cannot see anything to detect when Chrome is closed by the user.
anyone with any ideas?
|
open
|
2024-04-17T23:00:00Z
|
2025-03-19T12:00:44Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1830
|
[] |
bluemangofunk
| 2
|
vi3k6i5/flashtext
|
nlp
| 22
|
How can I get the positions of keywords?
|
Hi, the function `extract_keywords` only returns the keywords found in the sentence; however, sometimes we also care about the positions of the keywords. Is there any function that can achieve this (see the sketch below)?
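For reference, a minimal sketch of one way to get positions: newer flashtext releases accept a `span_info=True` argument to `extract_keywords`, which returns `(keyword, start, end)` tuples. Whether this option is available in your installed version is an assumption to verify:
```python
from flashtext import KeywordProcessor

kp = KeywordProcessor()
kp.add_keyword("Big Apple", "New York")
# span_info=True returns (keyword, start_index, end_index) tuples
print(kp.extract_keywords("I love Big Apple.", span_info=True))
# e.g. [('New York', 7, 16)]
```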
|
closed
|
2017-11-17T07:32:11Z
|
2017-11-21T19:04:34Z
|
https://github.com/vi3k6i5/flashtext/issues/22
|
[
"duplicate"
] |
liu-nlper
| 2
|
docarray/docarray
|
pydantic
| 928
|
chore: draft release note v0.20.1
|
# Release Note
[//]: <> (remove the phrase like "0 new features")
This release contains 2 bug fixes and 1 documentation improvement.
## 🐞 Bug Fixes
### Make Milvus DocumentArray thread safe and suitable for pytest (#904)
This bug was causing connectivity issues when using _multiple DocumentArrays in different threads to connect to the same Milvus instance_, e.g. in pytest.
This would produce an error like the following:
```bash
E1207 14:59:51.357528591 2279 fork_posix.cc:76] Other threads are currently calling into gRPC, skipping fork() handlers
E1207 14:59:51.367985469 2279 fork_posix.cc:76] Other threads are currently calling into gRPC, skipping fork() handlers
E1207 14:59:51.457061884 3934 ev_epoll1_linux.cc:824] assertion failed: gpr_atm_no_barrier_load(&g_active_poller) != (gpr_atm)worker
Fatal Python error: Aborted
```
This fix _creates a separate gRPC connection for each MilvusDocumentArray instance_, circumventing the issue.
### Restore backwards compatibility for (de)serialization (#903)
_DocArray v0.20.0 broke (de)serialization backwards compatibility with earlier versions_ of the library, making it impossible to load DocumentArrays from v0.19.1 or earlier from disk:
```python
# DocArray <= 0.19.1
da = DocumentArray([Document() for _ in range(10)])
da.save_binary('old-da.docarray')
# DocArray == 0.20.0
da = DocumentArray.load_binary('old-da.docarray')
da.extend([Document()])
print(da)
```
```bash
AttributeError: 'DocumentArrayInMemory' object has no attribute '_is_subindex'
```
_This fix restores backwards compatibility_ by not relying on newly introduced private attributes:
```python
# DocArray <= 0.19.1
da = DocumentArray([Document() for _ in range(10)])
da.save_binary('old-da.docarray')
# DocArray == 0.20.1
da = DocumentArray.load_binary('old-da.docarray')
da.extend([Document()])
print(da)
```
```bash
<DocumentArray (length=11) at 140683902276416>
Process finished with exit code 0
```
## 📗 Documentation Improvements
- Polish docs throughout (#895)
## 🤟 Contributors
We would like to thank all contributors to this release:
- Anne Yang (@AnneYang720)
- Nan Wang (@nan-wang)
- anna-charlotte (@anna-charlotte)
- Alex Cureton-Griffiths (@alexcg1)
|
closed
|
2022-12-12T09:00:01Z
|
2022-12-12T09:44:44Z
|
https://github.com/docarray/docarray/issues/928
|
[] |
alexcg1
| 0
|
dask/dask
|
scikit-learn
| 11,180
|
Cannot bind async delayed
|
**Describe the issue**:
[bind()](https://docs.dask.org/en/stable/graph_manipulation.html#dask.graph_manipulation.bind) does not work if the `children` parameter is a Delayed for an async function although it does work if such an argument is passed as `parents`.
**Minimal Complete Verifiable Example**:
```python
import asyncio
import time
import random
from dask import delayed
from dask.graph_manipulation import bind
from distributed import Client

def fsync():
    n = random.random()
    time.sleep(n)
    print("sync done", n)
    return n

async def fasync():
    n = random.random()
    await asyncio.sleep(n)
    print("async done", n)
    return n

if __name__ == "__main__":
    a = delayed(fsync)()
    b = delayed(fasync)()
    a_after_b = bind(a, b)
    b_after_a = bind(b, a)
    with Client():
        a_after_b.compute()
        # raises TypeError: cannot pickle 'coroutine' object
        b_after_a.compute()
```
|
open
|
2024-06-14T10:57:41Z
|
2024-06-14T10:57:52Z
|
https://github.com/dask/dask/issues/11180
|
[
"needs triage"
] |
gsakkis
| 0
|
amidaware/tacticalrmm
|
django
| 1,123
|
GPU load check
|
It would be very good to have a GPU load check that can be configured like the CPU load check, with configurable thresholds for warning and error states.
Please do include this.
Thank you
|
open
|
2022-05-11T13:40:05Z
|
2023-10-02T22:21:26Z
|
https://github.com/amidaware/tacticalrmm/issues/1123
|
[
"enhancement"
] |
rmmpositivens
| 1
|
sqlalchemy/alembic
|
sqlalchemy
| 524
|
IF (NOT) EXISTS for ALTER TABLE sub-commands like constraints etc
|
Hello, forgive me if an issue already exists for this; I have seen a related one [here](https://github.com/sqlalchemy/alembic/issues/151), but that is several years old and its status is unclear.
There doesn't seem to be an option to add IF (NOT) EXISTS or CASCADE flags to queries generated by e.g. `op.create_unique_constraint` or `op.drop_index`. Here are some raw queries I've added and directly executed lately:
```sql
DROP INDEX ix_host_ip
CASCADE;
```
or
```sql
ALTER TABLE host
DROP CONSTRAINT IF EXISTS vuln_occurrence_unique_constraint
CASCADE;
```
I would like to be able to create these with alembic, for example:
```py
op.drop_index('ix_host_ip', table_name='host', cascade=True)
op.drop_constraint_if_exists('vuln_occurrence_host_ip_fkey', 'vuln_occurrence', type_='foreignkey', cascade=True)
```
(The syntax could be different to better fit the style of existing commands, if needed.)
I haven't contributed yet but I might be able to work on this issue if it is deemed a worthwhile change to the project.
|
open
|
2019-01-09T22:14:13Z
|
2025-03-19T08:16:59Z
|
https://github.com/sqlalchemy/alembic/issues/524
|
[
"feature",
"op directives",
"external SQLAlchemy issues"
] |
EMCain
| 7
|
comfyanonymous/ComfyUI
|
pytorch
| 6,433
|
Connecting Lines Disappeared
|
### Expected Behavior
I expect the lines to be there.
### Actual Behavior

I started ComfyUI again, after no update or any other change, and the connecting lines had disappeared.
### Steps to Reproduce
I tried a new workflow. I also tried another web browser and searched around for an answer.
### Debug Logs
```powershell
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-01-11 13:21:28.643
** Platform: Windows
** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
** Python executable: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\python.exe
** ComfyUI Path: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI
** User directory: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user
** ComfyUI-Manager config path: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\config.ini
** Log path: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user\comfyui.log
#######################################################################
[ComfyUI-Manager] Starting dependency installation/(de)activation for the extension
[SKIP] Downgrading pip package isn't allowed: transformers (cur=4.47.1)
[SKIP] Downgrading pip package isn't allowed: tokenizers (cur=0.21.0)
[SKIP] Downgrading pip package isn't allowed: safetensors (cur=0.4.5)
## ComfyUI-Manager: EXECUTE => ['C:\\AI\\ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\python_embeded\\python.exe', '-m', 'pip', 'install', 'pyyaml']
## Execute install/(de)activation script for 'C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI'
[SKIP] Downgrading pip package isn't allowed: kornia (cur=0.7.4)
Install: pip packages for 'C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager'
[SKIP] Downgrading pip package isn't allowed: huggingface-hub (cur=0.27.0)
[ComfyUI-Manager] Startup script completed.
#######################################################################
Prestartup times for custom nodes:
5.9 seconds: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
Total VRAM 12282 MB, total RAM 32490 MB
pytorch version: 2.5.1+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 : cudaMallocAsync
Using pytorch attention
[Prompt Server] web root: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\web
### Loading: ComfyUI-Manager (V3.6.5)
### ComfyUI Version: v0.3.10-49-g42086af1 | Released on '2025-01-11'
Import times for custom nodes:
0.0 seconds: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
0.3 seconds: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
Starting server
To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
FETCH DATA from: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\cache\2233941102_nodes_page_1_limit_1000.json [DONE]
nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/cache
FETCH DATA from: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\cache\1514988643_custom-node-list.json [DONE]
FETCH DATA from: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager\extension-node-map.json [DONE]
FETCH DATA from: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\cache\1742899825_extension-node-map.json [DONE]
FETCH DATA from: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\cache\2233941102_nodes_page_1_limit_1000.json [DONE]
nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/cache
FETCH DATA from: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\cache\1514988643_custom-node-list.json [DONE]
FETCH DATA from: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\cache\746607195_github-stats.json [DONE]
FETCH DATA from: C:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\cache\832903789_extras.json [DONE]
```
### Other
I went on Reddit and other platforms and could not find a similar issue.
|
closed
|
2025-01-11T18:43:52Z
|
2025-02-10T21:28:39Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6433
|
[
"Potential Bug"
] |
RiversKosmos
| 2
|
adbar/trafilatura
|
web-scraping
| 621
|
trafilatura.fetch_url Timeout is set but does not work
|
Hello, I have set the timeout in settings.cfg, but it does not take effect when I use fetch_url in an asynchronous program, and I would like to know why. Version 1.9.0.
`html = trafilatura.fetch_url(url, no_ssl=True, config=config)`
```
# Download
DOWNLOAD_TIMEOUT = 3
MAX_FILE_SIZE = 20000000
MIN_FILE_SIZE = 10
# sleep between requests
SLEEP_TIME = 0.1
# user-agents here: agent1,agent2,...
USER_AGENTS =
# cookie for HTTP requests
COOKIE =
# Maximum number of redirects that we will follow
MAX_REDIRECTS = 2
```
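For reference, a minimal sketch (assumed, not shown in the report) of how such a config file is typically loaded and passed to `fetch_url`; the file path and URL are illustrative:
```python
from trafilatura import fetch_url
from trafilatura.settings import use_config

config = use_config("settings.cfg")  # the file shown above, with DOWNLOAD_TIMEOUT = 3
html = fetch_url("https://time.tianqi.com/china_shijian/", no_ssl=True, config=config)
```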
The error messages are below; the timestamps show that the requests take much longer than the configured timeout:
```
2024-06-15 14:26:26 autodl-container-36fb11a7ae-4b387aa7 urllib3.connectionpool[214602] WARNING Retrying (Retry(total=1, connect=0, read=None, redirect=2, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='time.tianqi.com', port=443): Read timed out.")': /china_shijian/
2024-06-15 14:26:44 autodl-container-36fb11a7ae-4b387aa7 urllib3.connectionpool[214602] WARNING Retrying (Retry(total=0, connect=0, read=None, redirect=2, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='time.tianqi.com', port=443): Read timed out.")': /china_shijian/
2024-06-15 14:26:55 autodl-container-36fb11a7ae-4b387aa7 trafilatura.downloads[214602] ERROR download error: https://time.tianqi.com/china_shijian/ HTTPSConnectionPool(host='time.tianqi.com', port=443): Max retries exceeded with url: /china_shijian/ (Caused by ReadTimeoutError("HTTPSConnectionPool(host='time.tianqi.com', port=443): Read timed out."))
2024-06-15 14:26:56 autodl-container-36fb11a7ae-4b387aa7 pyppeteer.launcher[214602] INFO Browser listening on: ws://127.0.0.1:59143/devtools/browser/1bf1a595-3eff-4cbe-aaa6-526d53d1fbce
2024-06-15 14:27:01 autodl-container-36fb11a7ae-4b387aa7 urllib3.connectionpool[214602] WARNING Retrying (Retry(total=1, connect=0, read=None, redirect=2, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='time.tianqi.com', port=443): Read timed out. (read timeout=3)")': /AoE/
2024-06-15 14:27:07 autodl-container-36fb11a7ae-4b387aa7 urllib3.connectionpool[214602] WARNING Retrying (Retry(total=0, connect=0, read=None, redirect=2, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='time.tianqi.com', port=443): Read timed out. (read timeout=3)")': /AoE/
2024-06-15 14:27:07 autodl-container-36fb11a7ae-4b387aa7 trafilatura.downloads[214602] ERROR not a 200 response: 404 for URL https://time.tianqi.com/AoE/
2024-06-15 14:27:09 autodl-container-36fb11a7ae-4b387aa7 pyppeteer.launcher[214602] INFO Browser listening on: ws://127.0.0.1:45475/devtools/browser/4714f9ce-078b-4ddf-94b9-ef16b68355ce
```
|
closed
|
2024-06-15T06:55:10Z
|
2024-06-18T15:38:06Z
|
https://github.com/adbar/trafilatura/issues/621
|
[
"question"
] |
Storm0921
| 2
|
apache/airflow
|
machine-learning
| 47,845
|
Implement masking for task sdk logs
|
### Body
https://github.com/apache/airflow/issues/45438 ported the secrets masker over to the task SDK, but we should also start masking the task logs.
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
|
open
|
2025-03-17T07:36:21Z
|
2025-03-17T07:38:39Z
|
https://github.com/apache/airflow/issues/47845
|
[
"area:logging",
"area:secrets",
"kind:meta",
"area:task-sdk"
] |
amoghrajesh
| 0
|
scrapy/scrapy
|
web-scraping
| 6,681
|
Test async callbacks using Contracts
|
Is there a way to test an async callback using Scrapy contracts? Based on the error message, it seems there is no support for that. With scrapy-playwright, most calls to the browser benefit from being async, so it would be great to have this feature (see the sketch after the traceback below).
Any thoughts on good testing practices for scrapy-playwright spiders?
```
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/ehsan/projects/upwork/karim/maryland-scraper/.venv/lib/python3.12/site-packages/scrapy/contracts/__init__.py", line 183, in cb_wrapper
output = cb(response, **cb_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehsan/projects/upwork/karim/maryland-scraper/.venv/lib/python3.12/site-packages/scrapy/contracts/__init__.py", line 70, in wrapper
raise TypeError("Contracts don't support async callbacks")
TypeError: Contracts don't support async callbacks
```
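For reference, a minimal sketch of the setup that produces the error above: an async callback annotated with contract tags. The spider name, URL, and selector are illustrative assumptions:
```python
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"

    async def parse(self, response):
        """Async callback with contract annotations.

        @url https://example.com
        @returns items 0 1
        """
        # scrapy-playwright-style spiders typically await page interactions here
        yield {"title": response.css("title::text").get()}
```
Running `scrapy check` on such a spider is what raises the `TypeError` shown above.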
|
open
|
2025-02-18T02:10:52Z
|
2025-02-18T06:52:42Z
|
https://github.com/scrapy/scrapy/issues/6681
|
[
"enhancement",
"contracts",
"asyncio"
] |
Ehsan-U
| 1
|
pallets-eco/flask-wtf
|
flask
| 441
|
update maintainer permissions
|
@lepture
Can you update my permissions in PyPI to owner instead of maintainer, so I can add other maintainers? @azmeuk is a maintainer of WTForms now and has been putting in a bunch of work here too.
Also, I need maintainer or owner permissions on the repo so I can fix the Read the Docs config, something got messed up with the webhook and it hasn't been building for four years.
We're on Discord still, if you want to chat there. https://discord.gg/pallets
|
closed
|
2021-05-24T17:35:04Z
|
2021-06-09T00:40:18Z
|
https://github.com/pallets-eco/flask-wtf/issues/441
|
[] |
davidism
| 1
|
tflearn/tflearn
|
data-science
| 186
|
Feed a pre-trained embedding using a CSV file
|
I have a pre-trained CSV file which contains a word embedding for each word in the vocab.
For example, with word vectors of dimension 3 and a vocab size of, say, 150:
food 0.4 -0.2 0.04
is 0.97 1.23 -2.3
caught 1.45 -2.34 0.23
...
I wish to use an LSTM framework in which I feed the word vectors from the CSV file at the input layer and do not re-train the word embeddings (as happens with the EMBEDDING layer in the example at: https://github.com/tflearn/tflearn/blob/master/examples/nlp/dynamic_lstm.py). I did a lot of searching and could not find a simple way to achieve this. Can someone please provide guidance (see the sketch below)?
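A minimal sketch (not from the thread) of one way to do this with tflearn: load the CSV into a matrix, declare the embedding layer as non-trainable, and copy the pre-trained vectors into it after building the model. The file name, vocab size, dimensions, and sequence length are illustrative assumptions:
```python
import numpy as np
import tflearn

VOCAB_SIZE, EMB_DIM, MAX_LEN = 150, 3, 20

# embeddings.csv: one line per word, "word v1 v2 v3"; word order defines the ids
vectors = []
with open("embeddings.csv") as f:
    for line in f:
        parts = line.split()
        vectors.append([float(x) for x in parts[1:]])
embedding_matrix = np.asarray(vectors, dtype=np.float32)  # shape (VOCAB_SIZE, EMB_DIM)

net = tflearn.input_data(shape=[None, MAX_LEN])
net = tflearn.embedding(net, input_dim=VOCAB_SIZE, output_dim=EMB_DIM,
                        trainable=False, name="EmbeddingLayer")
net = tflearn.lstm(net, 128)
net = tflearn.fully_connected(net, 2, activation="softmax")
net = tflearn.regression(net)

model = tflearn.DNN(net)
# overwrite the randomly initialized embedding weights with the CSV vectors
emb_var = tflearn.get_layer_variables_by_name("EmbeddingLayer")[0]
model.set_weights(emb_var, embedding_matrix)
```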
|
closed
|
2016-07-07T17:44:41Z
|
2016-07-30T09:42:35Z
|
https://github.com/tflearn/tflearn/issues/186
|
[] |
krayush07
| 6
|
MaartenGr/BERTopic
|
nlp
| 1,150
|
BERTopic for sentence similarity
|
Hi,
I'm new to NLP.
I wanted to ask: is there a way to determine the similarity between a query and each sentence in a set of sentences?
I've trained the model on my set of sentences.
|
closed
|
2023-04-03T11:02:00Z
|
2023-05-23T09:23:46Z
|
https://github.com/MaartenGr/BERTopic/issues/1150
|
[] |
leimish
| 2
|
charlesq34/pointnet
|
tensorflow
| 155
|
Dataset format, object, and data type
|
Hi, Charles.
Is your dataset based on individual frames generated by lidar? Do you segment each object in a frame and give it an object label?
Also, if I used my own generated lidar frames for testing, which parts of your code would I need to modify?
|
open
|
2018-12-16T14:21:44Z
|
2018-12-16T14:22:57Z
|
https://github.com/charlesq34/pointnet/issues/155
|
[] |
liemwellys
| 0
|
tensorflow/tensor2tensor
|
machine-learning
| 1,797
|
Dimension mismatch when running img2img transformer on PREDICT mode
|
### Description
I've trained a model on the `img2img_celeba` problem using the `img2img_transformer` model with the `img2img_transformer2d_base` hyperparameters, and I'm trying to do inference:
```
import tensorflow as tf
from tensor2tensor import problems
from tensor2tensor.utils import registry, trainer_lib
import tensor2tensor.models.image_transformer_2d as image_transformer_2d
problem_name = 'img2img_celeba'
model_name = 'img2img_transformer'
hparams_name = 'img2img_transformer2d_base'
data_dir = 'DATA DIR'
tmp_dir = 'TMP DIR'
train_dir = 'TRAIN DIR'
mode = tf.estimator.ModeKeys.PREDICT
my_problem = problems.problem(problem_name)
hparams = trainer_lib.create_hparams(hparams_name, data_dir=data_dir, problem_name=problem_name)
model = registry.model(model_name)(hparams, mode)
ds = my_problem.dataset(mode, data_dir).repeat(None).batch(256)
input_tensors = ds.make_one_shot_iterator().get_next()
# Just to make sure actual targets are not passed for teacher forcing
del input_tensors['targets']
model_output = model.infer(input_tensors)
ckpt_path = tf.train.latest_checkpoint(train_dir)
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, ckpt_path)
    my_results = sess.run([model_output, input_tensors])
```
And I run into the following error:
```
tensor2tensor/utils/t2t_model.py:325 call *
sharded_logits, losses = self.model_fn_sharded(sharded_features)
tensor2tensor/utils/t2t_model.py:402 model_fn_sharded *
sharded_logits, sharded_losses = dp(self.model_fn, datashard_to_features)
tensor2tensor/utils/expert_utils.py:231 __call__ *
outputs.append(fns[i](*my_args[i], **my_kwargs[i]))
tensor2tensor/utils/t2t_model.py:421 model_fn *
transformed_features = self.bottom(features)
tensor2tensor/utils/t2t_model.py:505 bottom *
transformed_features[feature_name] = bottom(features[feature_name],
tensor2tensor/layers/modalities.py:362 targets_bottom *
return bottom(x, model_hparams, vocab_size)
tensor2tensor/layers/modalities.py:353 image_channel_embeddings_bottom *
target_embeddings = cia.get_channel_embeddings(
tensor2tensor/layers/common_image_attention.py:679 get_channel_embeddings *
targets_split = tf.split(targets, io_depth, axis=3)
tensorflow_core/python/ops/array_ops.py:1684 split
axis=axis, num_split=num_or_size_splits, value=value, name=name)
tensorflow_core/python/ops/gen_array_ops.py:9898 split
"Split", split_dim=axis, value=value, num_split=num_split, name=name)
tensorflow_core/python/framework/op_def_library.py:794 _apply_op_helper
op_def=op_def)
tensorflow_core/python/util/deprecation.py:507 new_func
return func(*args, **kwargs)
tensorflow_core/python/framework/ops.py:3357 create_op
attrs, op_def, compute_device)
tensorflow_core/python/framework/ops.py:3426 _create_op_internal
op_def=op_def)
tensorflow_core/python/framework/ops.py:1770 __init__
control_input_ops)
tensorflow_core/python/framework/ops.py:1610 _create_c_op
raise ValueError(str(e))
ValueError: Dimension size must be evenly divisible by 3 but is 1
Number of ways to split should evenly divide the split dimension for 'while/img2img_transformer/parallel_0_4/img2img_transformer/img2img_transformer/identity_modality_1/targets_botto
m/split' (op: 'Split') with input shapes: [], [?,?,?,1] and with computed input tensors: input[0] = <3>.
```
The problem is that `image_channel_embeddings_bottom` is applied to the targets tensor (it is set in the `image_transformer2d_base` hparams as the bottom transformation for targets) and expects the last dimension to be the channels (3). However, in the `infer` function a tensor is created whose last dimension has size 1:
```
if self._target_modality_is_real:
    dim = self._problem_hparams.vocab_size["targets"]
    if dim is not None and hasattr(self._hparams, "vocab_divisor"):
        dim += (-dim) % self._hparams.vocab_divisor
    recent_output.set_shape([None, None, None, dim])
else:
    recent_output.set_shape([None, None, None, 1])
```
For this problem `self._target_modality_is_real` is False, so it goes into the `else` branch, the last dimension gets size 1, and thus the error happens. Is this a bug? What should I do to make it work? I have possible workarounds in mind but don't know if they are semantically valid
...
### Environment information
```
OS: Ubuntu 16.04
$ pip freeze | grep tensor
mesh-tensorflow==0.1.12
tensor2tensor==1.15.0
tensorboard==1.15.0
tensorflow-datasets==1.3.0
tensorflow-estimator==1.15.1
tensorflow-gan==2.0.0
tensorflow-gpu==1.15.0
tensorflow-hub==0.7.0
tensorflow-metadata==0.21.1
tensorflow-probability==0.7.0
$ python -V
Python 3.6.8
```
|
open
|
2020-03-10T05:27:00Z
|
2020-03-10T05:28:57Z
|
https://github.com/tensorflow/tensor2tensor/issues/1797
|
[] |
py4
| 0
|
PokemonGoF/PokemonGo-Bot
|
automation
| 5,965
|
Server busy or offline, reconnecting in xxx seconds
|
Getting a 'Server busy' error after the login procedure. Facing this all the time. I am attaching config.json.
2017-03-15 22:52:59,731 [ cli] [INFO] PokemonGO Bot v1.0
2017-03-15 22:52:59,748 [ cli] [INFO] commit: 26117e6e
2017-03-15 22:52:59,765 [ cli] [INFO] Configuration initialized
2017-03-15 22:52:59,766 [pokemongo_bot.health_record.bot_event] [INFO] Health check is enabled. For more information:
2017-03-15 22:52:59,766 [pokemongo_bot.health_record.bot_event] [INFO] https://github.com/PokemonGoF/PokemonGo-Bot/tree/dev#analytics
2017-03-15 22:52:59,792 [requests.packages.urllib3.connectionpool] [INFO] Starting new HTTP connection (1): www.google-analytics.com
(2713) wsgi starting up on http://127.0.0.1:4000
[2017-03-15 22:53:00] [PokemonGoBot] [INFO] Setting start location.
[2017-03-15 22:53:00] [PokemonGoBot] [INFO] [x] Coordinates found in passed in location, not geocoding.
[2017-03-15 22:53:00] [PokemonGoBot] [INFO] Location found: 40.766014, -73.977580 (40.766014, -73.97758, 8)
[2017-03-15 22:53:00] [PokemonGoBot] [INFO] Now at (40.766014, -73.97758, 8)
[2017-03-15 22:53:00] [PokemonGoBot] [INFO] Login procedure started.
[2017-03-15 22:53:35] [PokemonGoBot] [INFO] Server busy or offline, reconnecting in 814 seconds
[config.json.txt](https://github.com/PokemonGoF/PokemonGo-Bot/files/846206/config.json.txt)
|
closed
|
2017-03-15T23:01:42Z
|
2017-03-29T05:00:40Z
|
https://github.com/PokemonGoF/PokemonGo-Bot/issues/5965
|
[] |
roxane11
| 18
|
miguelgrinberg/python-socketio
|
asyncio
| 414
|
make SIGINT handling optional
|
Currently, adding SIGINT handling makes it impossible to run the Client in a thread. I suggest making signal handling optional (also in engineio). The use case is a program that does many things in its main thread and also uses the socketio Client in a background thread to handle communications with a counterparty (see the sketch below).
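For context, a minimal sketch of that use case; the server URL is an assumption. With SIGINT handlers installed unconditionally, this is the arrangement that currently breaks:
```python
import threading
import socketio

sio = socketio.Client()


def comms():
    # background communications with the counterparty
    sio.connect("http://localhost:5000")  # hypothetical server
    sio.wait()


threading.Thread(target=comms, daemon=True).start()
# ... the main thread goes on doing the program's primary work ...
```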
|
closed
|
2020-01-20T07:53:16Z
|
2020-04-10T14:31:42Z
|
https://github.com/miguelgrinberg/python-socketio/issues/414
|
[
"question"
] |
prodipta
| 2
|
pyeve/eve
|
flask
| 818
|
Allow disabling entry point semantics for HATEOAS
|
This is to some degree a rehash of #473, but I’d like to raise a new point.
> HATEOAS links are always relative to the API entry point, so if your API home is at `examples.com/api/v1`, the `self` link in the above example would mean that the _people_ endpoint is located at `examples.com/api/v1/people`.
This greatly reduces the potential usefulness of HATEOAS, because it requires the client to have application-specific, out-of-band knowledge (the API entry point) in order to resolve the URLs.
This is becoming important due to the rise of standardized hypermedia formats like [HAL](http://stateless.co/hal_specification.html) (very similar to Eve’s default HATEOAS format). A client coded against HAL might be seeded with a start URL and then follow a chain of links to other resources, possibly located on other servers. Eventually it may navigate to a resource that is served by Eve—but it knows nothing about Eve or its entry points, all it knows is the request URL. For the client to keep going, the URLs served by Eve have to be either absolute (`http://...`) or relative to the request URL (e.g. `/api/v1/...`), but _not_ to the entry point.
Therefore, I propose an **optional** setting that would make Eve’s HATEOAS conform to this expectation.
For example, this setting could take a form of a “base entry point URL” against which the current URLs are to be resolved. If the user specifies `/api/v1/`, then a URL like `people?page=2` would be transformed to `/api/v1/people?page=2`. If the user specifies `http://example.com/_myLegacyApp/`, then a URL like `people?page=2` would be transformed to `http://example.com/_myLegacyApp/people?page=2`.
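For illustration only (this is not Eve code), the resolution described above matches standard relative-URL resolution against the configured base, e.g. with the standard library:
```python
from urllib.parse import urljoin

print(urljoin("/api/v1/", "people?page=2"))
# /api/v1/people?page=2
print(urljoin("http://example.com/_myLegacyApp/", "people?page=2"))
# http://example.com/_myLegacyApp/people?page=2
```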
Since this setting would be optional, installations that don’t know / don’t care about their location in the URL space can keep running with the current relative-to-entry-point URLs, which are better than nothing.
Alternatively, for installations where the gateway (if any) preserves the “real” request URL (i.e. `Host` and request path), maybe Eve could look at that to derive the “real” HATEOAS URLs, again triggered with an optional setting.
If you agree with the overall idea, I can give it a try.
|
closed
|
2016-02-04T18:46:18Z
|
2017-01-03T09:53:06Z
|
https://github.com/pyeve/eve/issues/818
|
[
"enhancement"
] |
vfaronov
| 5
|
ivy-llc/ivy
|
numpy
| 27,985
|
Fix Frontend Failing Test: torch - linalg.torch.linalg.norm
|
Example of failed tests:
==================== 20.0% of 5 passed ====================
=========================== short test summary info ===========================
FAILED ivy_tests/test_ivy/test_frontends/test_torch/test_linalg.py::test_torch_norm[cpu-jax-False-False] - ivy.utils.exceptions.IvyValueError: jax: matrix_norm: Invalid axis values ((0, 1, 2)) for jnp.linalg.norm.
FAILED ivy_tests/test_ivy/test_frontends/test_torch/test_linalg.py::test_torch_norm[cpu-tensorflow-False-False] - ivy.utils.exceptions.IvyValueError: tensorflow: matrix_norm: 'axis' must be None, an integer, or a tuple of 2 unique integers, got (0, 1, 2)
FAILED ivy_tests/test_ivy/test_frontends/test_torch/test_linalg.py::test_torch_norm[cpu-torch-False-False] - ivy.utils.exceptions.IvyBackendException: torch: matrix_norm: linalg.matrix_norm: dim must be a 2-tuple. Got 0 1 2
FAILED ivy_tests/test_ivy/test_frontends/test_torch/test_linalg.py::test_torch_norm[cpu-paddle-False-False] - RuntimeError: linalg.norm: If dim is specified, it must be of length 1 or 2. Got [0, 1, 2]
Issue: torch.linalg.norm can take an axis of length 1 or 2 (in addition to None). The test allows axes of length up to 5.
|
closed
|
2024-01-22T04:11:13Z
|
2024-02-25T10:30:50Z
|
https://github.com/ivy-llc/ivy/issues/27985
|
[
"Sub Task"
] |
shruzki
| 0
|
axnsan12/drf-yasg
|
rest-api
| 872
|
Wrong Base URL: localhost
|
# Bug Report
## Description
The base URL is evaluated as localhost:
[ Base URL: localhost/v1 ]
## Is this a regression?
<!-- Did this behavior use to work in the previous version? -->
Can't say
## Minimal Reproduction
Use drf-yasg 1.27.x with django 4.1.12 and drf ^3.14
## Your Environment
```
schema_view = get_schema_view(
    openapi.Info(
        title = "my API",
        default_version = 'v1',
        description = 'my api description',
        terms_of_service = "https://www.google.com/policies/terms/",
        contact = openapi.Contact(email = 'user@example.com'),
        license = openapi.License(name = 'BSD License'),
    ),
    public=True,
    permission_classes=(permissions.AllowAny,),
)

urlpatterns = [
    ...
    path('docs/', schema_view.with_ui('swagger', cache_timeout=0),
         name='schema-swagger-ui'),
    ...
]

#my_app/urls.py
SWAGGER_SETTINGS = {
    "DEFAULT_API_URL": "https://mydomain.tld/v1"
}
```
|
open
|
2023-10-26T07:16:26Z
|
2025-03-07T12:09:08Z
|
https://github.com/axnsan12/drf-yasg/issues/872
|
[
"triage"
] |
navxio
| 2
|
kiwicom/pytest-recording
|
pytest
| 29
|
Provide extra config option for shared cassettes dir(s)
|
It could be a separate fixture that returns a list of paths in which to look for cassettes. Currently, we need to use the full path to the cassette in `pytest.mark.vcr`, which is tedious. However, shared cassettes could be used via a separate mark to avoid breaking the existing interface - `pytest.mark.vcr_shared("first.yaml", "second.yaml")` (see the sketch below).
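Hypothetical usage of the proposed mark (this API does not exist yet; cassette names are illustrative):
```python
import pytest


@pytest.mark.vcr_shared("first.yaml", "second.yaml")  # proposed, not implemented
@pytest.mark.vcr
def test_payments_flow():
    ...
```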
|
open
|
2020-01-06T22:14:27Z
|
2020-01-06T22:14:27Z
|
https://github.com/kiwicom/pytest-recording/issues/29
|
[] |
Stranger6667
| 0
|
dpgaspar/Flask-AppBuilder
|
flask
| 1,563
|
Flask AppBuilder update from 2.1.5 to 3.1.1
|
### Environment
Flask-Appbuilder version: 3.1.1
pip freeze output:
```
alembic==1.5.4
aniso8601==8.1.1
apispec==3.3.2
attrs==20.3.0
Babel==2.9.0
beautifulsoup4==4.9.3
blinker==1.4
certifi==2020.12.5
cffi==1.14.4
chardet==4.0.0
click==7.1.2
colorama==0.4.4
cryptography==3.4.3
defusedxml==0.6.0
django-htmlmin==0.11.0
dnspython==2.1.0
elementpath==2.1.3
email-validator==1.1.2
Flask==1.1.2
Flask-AppBuilder==3.1.1
Flask-Babel==1.0.0
Flask-DebugToolbar==0.11.0
Flask-JWT-Extended==3.25.0
Flask-Login==0.4.1
Flask-Mail==0.9.1
Flask-Migrate==2.6.0
Flask-OpenID==1.2.5
Flask-RESTful==0.3.8
Flask-SQLAlchemy==2.4.4
Flask-WTF==0.14.3
gitdb==4.0.5
GitPython==3.1.13
html5lib==1.1
htmlmin==0.1.12
idna==2.10
importlib-metadata==3.4.0
importlib-resources==5.1.0
itsdangerous==1.1.0
Jinja2==2.11.3
json2html==1.3.0
jsonschema==3.2.0
lxml==4.6.2
Mako==1.1.4
MarkupSafe==1.1.1
marshmallow==3.10.0
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.23.1
prettytable==2.0.0
prison==0.1.3
psycopg2==2.7.7
psycopg2-binary==2.8.6
pycparser==2.20
PyJWT==1.7.1
pyOpenSSL==20.0.1
pyrsistent==0.17.3
pysaml2==6.5.1
python-crontab==2.5.1
python-dateutil==2.8.1
python-dotenv==0.15.0
python-editor==1.0.4
python3-openid==3.2.0
pytz==2021.1
PyYAML==5.4.1
requests==2.25.1
six==1.15.0
smmap==3.0.5
soupsieve==2.1
SQLAlchemy==1.3.23
SQLAlchemy-Utils==0.36.8
typing-extensions==3.7.4.3
urllib3==1.26.3
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==1.0.1
WTForms==2.3.3
xmlschema==1.5.0
xmltodict==0.12.0
zipp==3.4.0
```
### Describe the expected results
When using the edit button it takes us to the edit page delivered by appbuilder, but it gives us this error:
```
Traceback (most recent call last):
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask/app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask/app.py", line 2450, in wsgi_app
response = self.handle_exception(e)
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router
return original_handler(e)
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask/app.py", line 1867, in handle_exception
reraise(exc_type, exc_value, tb)
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router
return original_handler(e)
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask_debugtoolbar/__init__.py", line 125, in dispatch_request
return view_func(**req.view_args)
File "/usr/lib64/python3.6/cProfile.py", line 109, in runcall
return func(*args, **kw)
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask_appbuilder/security/decorators.py", line 109, in wraps
return f(self, *args, **kwargs)
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask_appbuilder/views.py", line 610, in edit
related_views=self._related_views,
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask_appbuilder/baseviews.py", line 281, in render_template
template, **dict(list(kwargs.items()) + list(self.extra_args.items()))
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask/templating.py", line 140, in render_template
ctx.app,
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask/templating.py", line 120, in _render
rv = template.render(context)
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask_appbuilder/templates/appbuilder/general/model/edit.html", line 2, in top-level template code
{% import 'appbuilder/general/lib.html' as lib %}
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask_appbuilder/templates/appbuilder/base.html", line 1, in top-level template code
{% extends base_template %}
File "/opt/webapps/iamp/IAMP/app/templates/appbuilder/baselayout.html", line 4, in top-level template code
{% set languages = appbuilder.languages %}
File "/opt/webapps/iamp/IAMP/app/templates/appbuilder/init.html", line 59, in top-level template code
{% block body %}
File "/opt/webapps/iamp/IAMP/app/templates/appbuilder/baselayout.html", line 25, in block "body"
{% block content %}
File "/opt/webapps/iamp/venv/lib64/python3.6/site-packages/flask_appbuilder/templates/appbuilder/general/model/edit.html", line 5, in block "content"
{{ lib.panel_begin(title, "edit") }}
TypeError: macro 'panel_begin' takes not more than 1 argument(s)
```
It says that panel_begin takes no more than one argument, even though when we look at lib.html the macro accepts two. We upgraded our modules from 2.1.5 to 3.1.1, along with other package versions.
This occurs for the edit as well as the show pages. It seems anything that calls the panel_begin macro with two arguments errors out.
|
closed
|
2021-02-09T20:07:06Z
|
2021-02-10T15:18:36Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/1563
|
[] |
danner26
| 2
|
coqui-ai/TTS
|
python
| 3,454
|
[Bug] Docker Image configuration error when running TTS server.
|
### Describe the bug
VITS is working fine, but a number of other multilingual models fail to run because of a configuration issue.
A partial list of the models that don't work:
tts_models/multilingual/multi-dataset/xtts_v2
tts_models/multilingual/multi-dataset/bark
tts_models/en/multi-dataset/tortoise-v2
### To Reproduce
Download and run the docker image on Windows 10 following the tutorial instructions [here](https://docs.coqui.ai/en/dev/docker_images.html).
The setting I used was GPU = true.
### Expected behavior
Models should run.
### Logs
```shell
StackTrace:
root@709cd4fb2c7c:~# python3 TTS/server/server.py --use_cuda true --model_name tts_models/multilingual/multi-dataset/xtts_v2
> tts_models/multilingual/multi-dataset/xtts_v2 is already downloaded.
Traceback (most recent call last):
File "/root/TTS/server/server.py", line 104, in <module>
synthesizer = Synthesizer(
File "/root/TTS/utils/synthesizer.py", line 93, in __init__
self._load_tts(tts_checkpoint, tts_config_path, use_cuda)
File "/root/TTS/utils/synthesizer.py", line 183, in _load_tts
self.tts_config = load_config(tts_config_path)
File "/root/TTS/config/__init__.py", line 82, in load_config
ext = os.path.splitext(config_path)[1]
File "/usr/lib/python3.10/posixpath.py", line 118, in splitext
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
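For what it's worth, a minimal diagnostic sketch (assuming the `ModelManager` API from `TTS.utils.manage`; its return values may differ between releases) to check whether the downloader actually hands back a config path for these models, since the traceback suggests `tts_config_path` ends up as `None`:
```python
from TTS.utils.manage import ModelManager

manager = ModelManager()
model_path, config_path, model_item = manager.download_model(
    "tts_models/multilingual/multi-dataset/xtts_v2"
)

# For multi-file models such as XTTS the returned config_path can be None and the
# config.json lives inside model_path instead, which would explain the NoneType error
# when server.py passes config_path straight through to Synthesizer/load_config.
print("model_path:", model_path)
print("config_path:", config_path)
```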
### Environment
```shell
"CUDA": {
"GPU": [
"NVIDIA GeForce RTX 3060"
],
"available": true,
"version": "11.8"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.1+cu118",
"TTS": "0.22.0",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
""
],
"processor": "x86_64",
"python": "3.10.12",
"version": "#1 SMP Thu Oct 5 21:02:42 UTC 2023"
}
```
### Additional context
I did a git clone of the latest repo into the docker container and reinstalled all of the dependencies, and the error still occurs, so I'm guessing it's still an unresolved issue.
_No response_
|
closed
|
2023-12-20T18:06:26Z
|
2025-01-15T16:58:31Z
|
https://github.com/coqui-ai/TTS/issues/3454
|
[
"bug",
"wontfix"
] |
EvarDion
| 12
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,176
|
l2 regularisation
|
Hello,
I want to add L2 regularisation. Can you tell me where I can add this line:
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
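A minimal standalone sketch of the idea (the repository file and attribute names mentioned in the comment are from memory and may differ slightly between versions):
```python
import torch
import torch.nn as nn

# Any nn.Module's parameters can be given L2 regularisation via Adam's weight_decay.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 3, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)

# In this repository, the equivalent place would be where optimizer_G / optimizer_D
# are constructed (e.g. in models/cycle_gan_model.py or models/pix2pix_model.py),
# by adding weight_decay=1e-5 to those torch.optim.Adam(...) calls.
```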
|
open
|
2020-11-06T20:03:48Z
|
2020-11-25T18:01:54Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1176
|
[] |
SurbhiKhushu
| 1
|
microsoft/nni
|
tensorflow
| 5,342
|
nni webportal doesn't show
|
**Describe the issue**:
The web portal doesn't appear after executing `nnictl create --config config_detailed.yml`.
**Environment**: Google Cloud VM
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc):
- Client OS: Ubuntu 20
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?: Yes
- Is running in Docker?: No
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
|
closed
|
2023-02-08T22:07:06Z
|
2023-02-17T02:50:52Z
|
https://github.com/microsoft/nni/issues/5342
|
[] |
yiqiaoc11
| 5
|
huggingface/transformers
|
deep-learning
| 36,123
|
torch._subclasses.fake_tensor.DataDependentOutputException: aten._local_scalar_dense.default with `_prepare_4d_attention_mask_for_sdpa(
|
> Hello @fxmarty
>
> When I try using torch.compile by using `_attn_implementation="sdpa"` in `BertConfig`, I get the error coming from `_prepare_4d_attention_mask_for_sdpa()`, which is because of the data-dependent control flow.
>
> Specifically,
>
>
>
> ```
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py", line 1108, in forward
>
> extended_attention_mask = _prepare_4d_attention_mask_for_sdpa(
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/transformers/modeling_attn_mask_utils.py", line 448, in _prepare_4d_attention_mask_for_sdpa
>
> if not is_tracing and torch.all(mask == 1):
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/functional_tensor.py", line 411, in __torch_dispatch__
>
> outs_unwrapped = func._op_dk(
>
> ^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/utils/_stats.py", line 20, in wrapper
>
> return fn(*args, **kwargs)
>
> ^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 896, in __torch_dispatch__
>
> return self.dispatch(func, types, args, kwargs)
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1241, in dispatch
>
> return self._cached_dispatch_impl(func, types, args, kwargs)
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 974, in _cached_dispatch_impl
>
> output = self._dispatch_impl(func, types, args, kwargs)
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1431, in _dispatch_impl
>
> op_impl_out = op_impl(self, func, *args, **kwargs)
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_impls.py", line 150, in dispatch_to_op_implementations_dict
>
> return op_implementations_dict[func](fake_mode, func, *args, **kwargs)
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_impls.py", line 284, in local_scalar_dense
>
> raise DataDependentOutputException(func)
>
> torch._subclasses.fake_tensor.DataDependentOutputException: aten._local_scalar_dense.default
>
> ```
>
> Is this related to https://github.com/pytorch/pytorch/pull/120400, and do you anticipate there's any solution to this? Of course, turning SDPA off works.
>
>
_Originally posted by @amodab01 in [221aaec](https://github.com/huggingface/transformers/commit/221aaec6ecf7558e4956dadd662d7d3adb22e420#r152370315)_
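For reference, a minimal sketch of the workaround mentioned at the end of the quote (forcing the eager attention implementation so the SDPA mask preparation is skipped; the model name and input are placeholders):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# attn_implementation="eager" avoids _prepare_4d_attention_mask_for_sdpa entirely.
model = AutoModel.from_pretrained("bert-base-uncased", attn_implementation="eager")

compiled_model = torch.compile(model)
inputs = tokenizer("hello world", return_tensors="pt")
with torch.no_grad():
    outputs = compiled_model(**inputs)
print(outputs.last_hidden_state.shape)
```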
|
closed
|
2025-02-10T19:31:51Z
|
2025-03-21T08:04:39Z
|
https://github.com/huggingface/transformers/issues/36123
|
[] |
amodab01
| 1
|
mars-project/mars
|
pandas
| 2,720
|
[BUG] index out of range when using Mars with XGBOOST
|
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
IndexError: list assignment index out of range
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
Python 3.7.7 on ray docker 1.9
2. The version of Mars you use
0.9.0a2
3. Versions of crucial packages, such as numpy, scipy and pandas
pip install xgboost
pip install "xgboost_ray"
pip install lightgbm
4. Full stack of the error.
(base) ray@eb0b527fa9ea:~/ray/ray$ python main.py
2022-02-16 12:39:47,496 WARNING ray.py:301 -- Ray is not started, start the local ray cluster by `ray.init`.
2022-02-16 12:39:50,168 INFO services.py:1340 -- View the Ray dashboard at http://127.0.0.1:8265
2022-02-16 12:39:51,580 INFO driver.py:34 -- Setup cluster with {'ray://ray-cluster-1645043987/0': {'CPU': 8}, 'ray://ray-cluster-1645043987/1': {'CPU': 8}, 'ray://ray-cluster-1645043987/2': {'CPU': 8}, 'ray://ray-cluster-1645043987/3': {'CPU': 8}}
2022-02-16 12:39:51,581 INFO driver.py:36 -- Creating placement group ray-cluster-1645043987 with bundles [{'CPU': 8}, {'CPU': 8}, {'CPU': 8}, {'CPU': 8}].
2022-02-16 12:39:51,716 INFO driver.py:50 -- Create placement group success.
2022-02-16 12:39:52,978 INFO ray.py:479 -- Create supervisor on node ray://ray-cluster-1645043987/0/0 succeeds.
2022-02-16 12:39:53,230 INFO ray.py:489 -- Start services on supervisor ray://ray-cluster-1645043987/0/0 succeeds.
2022-02-16 12:40:07,025 INFO ray.py:498 -- Create 4 workers and start services on workers succeeds.
2022-02-16 12:40:07,036 WARNING ray.py:510 -- Web service started at http://0.0.0.0:46910
0%| | 0/100 [00:00<?, ?it/s]
Traceback (most recent call last):
File "main.py", line 69, in <module>
main()
File "main.py", line 35, in main
df_train, df_test = _load_data(n_samples, n_features, n_classes, test_size=0.2)
File "main.py", line 25, in _load_data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=shuffle_seed)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/learn/model_selection/_split.py", line 145, in train_test_split
session=session, **(run_kwargs or dict())
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/entity/executable.py", line 221, in execute
ret = execute(*self, session=session, **kw)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py", line 1779, in execute
**kwargs,
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py", line 1574, in execute
timeout=self._isolated_session.timeout
File "/home/ray/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 435, in result
return self.__get_result()
File "/home/ray/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py", line 1725, in _execute
asyncio.shield(execution_info), progress_update_interval
File "/home/ray/anaconda3/lib/python3.7/asyncio/tasks.py", line 442, in wait_for
return fut.result()
File "/home/ray/anaconda3/lib/python3.7/asyncio/tasks.py", line 630, in _wrap_awaitable
return (yield from awaitable.__await__())
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py", line 102, in wait
return await self._aio_task
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py", line 903, in _run_in_background
raise task_result.error.with_traceback(task_result.traceback)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/processor.py", line 57, in inner
return await func(processor, *args, **kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/processor.py", line 336, in get_next_stage_processor
chunk_graph = await self._get_next_chunk_graph(self._chunk_graph_iter)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/processor.py", line 266, in _get_next_chunk_graph
chunk_graph = await fut
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/lib/aio/_threads.py", line 36, in to_thread
return await loop.run_in_executor(None, func_call)
File "/home/ray/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/processor.py", line 261, in next_chunk_graph
return next(chunk_graph_iter)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/preprocessor.py", line 158, in tile
for chunk_graph in chunk_graph_builder.build():
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/graph/builder/chunk.py", line 272, in build
yield from self._build()
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/graph/builder/chunk.py", line 266, in _build
graph = next(tile_iterator)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/preprocessor.py", line 75, in __iter__
to_update_tileables = self._iter()
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/graph/builder/chunk.py", line 204, in _iter
visited,
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/graph/builder/chunk.py", line 113, in _tile
need_process = next(tile_handler)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/graph/builder/chunk.py", line 84, in _tile_handler
tiled_tileables = yield from handler.tile(tiled_tileables)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/entity/tileables.py", line 79, in tile
tiled_result = yield from tile_handler(op)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/learn/utils/shuffle.py", line 217, in tile
inp = yield from cls._safe_rechunk(inp, ax_nsplit)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/learn/utils/shuffle.py", line 144, in _safe_rechunk
return (yield from recursive_tile(tileable.rechunk(ax_nsplit)))
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/tensor/rechunk/rechunk.py", line 103, in rechunk
chunk_size = get_nsplits(tensor, chunk_size, tensor.dtype.itemsize)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/tensor/rechunk/core.py", line 34, in get_nsplits
chunk_size[idx] = c
IndexError: list assignment index out of range
(RayMainPool pid=5574) Unexpected error happens in <function TaskProcessor.get_next_stage_processor at 0x7f276f2458c0>
(RayMainPool pid=5574) Traceback (most recent call last):
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/processor.py", line 57, in inner
(RayMainPool pid=5574) return await func(processor, *args, **kwargs)
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/processor.py", line 336, in get_next_stage_processor
(RayMainPool pid=5574) chunk_graph = await self._get_next_chunk_graph(self._chunk_graph_iter)
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/processor.py", line 266, in _get_next_chunk_graph
(RayMainPool pid=5574) chunk_graph = await fut
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/lib/aio/_threads.py", line 36, in to_thread
(RayMainPool pid=5574) return await loop.run_in_executor(None, func_call)
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
(RayMainPool pid=5574) result = self.fn(*self.args, **self.kwargs)
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/processor.py", line 261, in next_chunk_graph
(RayMainPool pid=5574) return next(chunk_graph_iter)
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/preprocessor.py", line 158, in tile
(RayMainPool pid=5574) for chunk_graph in chunk_graph_builder.build():
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/graph/builder/chunk.py", line 272, in build
(RayMainPool pid=5574) yield from self._build()
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/graph/builder/chunk.py", line 266, in _build
(RayMainPool pid=5574) graph = next(tile_iterator)
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/preprocessor.py", line 75, in __iter__
(RayMainPool pid=5574) to_update_tileables = self._iter()
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/graph/builder/chunk.py", line 204, in _iter
(RayMainPool pid=5574) visited,
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/graph/builder/chunk.py", line 113, in _tile
(RayMainPool pid=5574) need_process = next(tile_handler)
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/graph/builder/chunk.py", line 84, in _tile_handler
(RayMainPool pid=5574) tiled_tileables = yield from handler.tile(tiled_tileables)
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/entity/tileables.py", line 79, in tile
(RayMainPool pid=5574) tiled_result = yield from tile_handler(op)
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/learn/utils/shuffle.py", line 217, in tile
(RayMainPool pid=5574) inp = yield from cls._safe_rechunk(inp, ax_nsplit)
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/learn/utils/shuffle.py", line 144, in _safe_rechunk
(RayMainPool pid=5574) return (yield from recursive_tile(tileable.rechunk(ax_nsplit)))
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/tensor/rechunk/rechunk.py", line 103, in rechunk
(RayMainPool pid=5574) chunk_size = get_nsplits(tensor, chunk_size, tensor.dtype.itemsize)
(RayMainPool pid=5574) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/tensor/rechunk/core.py", line 34, in get_nsplits
(RayMainPool pid=5574) chunk_size[idx] = c
(RayMainPool pid=5574) IndexError: list assignment index out of range
5. Minimized code to reproduce the error.
```python
import logging
import ray
import mars
import numpy as np
import mars.dataframe as md
from mars.learn.model_selection import train_test_split
from mars.learn.datasets import make_classification
from xgboost_ray import RayDMatrix, RayParams, train, predict
logger = logging.getLogger(__name__)
logging.basicConfig(format=ray.ray_constants.LOGGER_FORMAT, level=logging.INFO)
def _load_data(n_samples: int,
n_features:int,
n_classes: int,
test_size: float = 0.1,
shuffle_seed: int = 42):
n_informative = int(n_features * 0.5)
n_redundant = int(n_features * 0.2)
# generate dataset
X, y = make_classification(n_samples=n_samples, n_features=n_features, n_classes=n_classes, n_informative=n_informative, n_redundant=n_redundant, random_state=shuffle_seed)
X, y = md.DataFrame(X), md.DataFrame({"labels": y})
X.columns = ['feature-' + str(i) for i in range(n_features)]
# split dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=shuffle_seed)
return md.concat([X_train, y_train], axis=1), md.concat([X_test, y_test], axis=1)
def main(*args):
n_samples, n_features, worker_num, worker_cpu, num_shards = 10 ** 4, 20, 4, 8, 10
ray_params = RayParams(num_actors=10, cpus_per_actor=1)
# setup mars
mars.new_ray_session(worker_num=worker_num, worker_cpu=worker_cpu, worker_mem=1 * 1024 ** 3)
n_classes = 10
df_train, df_test = _load_data(n_samples, n_features, n_classes, test_size=0.2)
print(df_train)
print(df_test)
# convert mars DataFrame to Ray dataset
ds_train = md.to_ray_dataset(df_train, num_shards=num_shards)
ds_test = md.to_ray_dataset(df_test, num_shards=num_shards)
train_set = RayDMatrix(data=ds_train, label="labels")
test_set = RayDMatrix(data=ds_test, label="labels")
evals_result = {}
params = {
'nthread': 1,
'objective': 'multi:softmax',
'eval_metric': ['mlogloss', 'merror'],
'num_class': n_classes,
'eta': 0.1,
'seed': 42
}
bst = train(
params=params,
dtrain=train_set,
num_boost_round=200,
evals=[(train_set, 'train')],
evals_result=evals_result,
verbose_eval=100,
ray_params=ray_params
)
# predict on a test set.
pred = predict(bst, test_set, ray_params=ray_params)
precision = (ds_test.dataframe['labels'].to_pandas() == pred).astype(int).sum() / ds_test.dataframe.shape[0]
logger.info("Prediction Accuracy: %.4f", precision)
if __name__ == "__main__":
main()
```
**Expected behavior**
`train_test_split` should complete successfully; instead it fails with "IndexError: list assignment index out of range".
|
closed
|
2022-02-16T20:45:14Z
|
2022-02-20T11:06:51Z
|
https://github.com/mars-project/mars/issues/2720
|
[
"type: bug",
"mod: tensor",
"prio: high",
"task: medium"
] |
jyizheng
| 1
|
davidsandberg/facenet
|
tensorflow
| 981
|
How can I run compare.py on Windows?
|
open
|
2019-02-24T19:22:52Z
|
2019-04-03T22:16:27Z
|
https://github.com/davidsandberg/facenet/issues/981
|
[] |
mohammedSamirMady
| 1
|
|
sinaptik-ai/pandas-ai
|
data-science
| 1,565
|
pandasai-openai source code
|
Hi Team,
Regarding the pandasai-openai library, could you please let me know its git repository location?
Thanks and Regards
Sumeet Lalla
|
closed
|
2025-01-30T17:23:26Z
|
2025-01-30T17:42:22Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1565
|
[] |
prasum
| 2
|
scikit-learn/scikit-learn
|
python
| 31,020
|
⚠️ CI failed on Check sdist (last failure: Mar 20, 2025) ⚠️
|
**CI is still failing on [Check sdist](https://github.com/scikit-learn/scikit-learn/actions/runs/13959330746)** (Mar 20, 2025)
|
closed
|
2025-03-19T00:27:53Z
|
2025-03-20T12:26:52Z
|
https://github.com/scikit-learn/scikit-learn/issues/31020
|
[
"Needs Triage"
] |
scikit-learn-bot
| 1
|
piskvorky/gensim
|
machine-learning
| 3,017
|
Inconsistency within documentation
|
<!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
Hi, I found inconsistency within your documentation.
In some examples `AnnoyIndexer` is imported from `gensim.similarities.annoy` and in some from `gensim.similarities.index`.
I tried to import from both, but only `gensim.similarities.index` works.
#### Steps/code/corpus to reproduce
Go to documentation: https://radimrehurek.com/gensim/similarities/annoy.html .
#### Versions
```
Windows-10-10.0.19041-SP0
Python 3.7.9 (default, Aug 31 2020, 17:10:11) [MSC v.1916 64 bit (AMD64)]
Bits 64
NumPy 1.19.2
SciPy 1.5.2
gensim 3.8.3
FAST_VERSION 1
```
|
closed
|
2020-12-27T13:07:01Z
|
2020-12-27T15:44:47Z
|
https://github.com/piskvorky/gensim/issues/3017
|
[] |
JakovGlavac
| 1
|
indico/indico
|
sqlalchemy
| 6,601
|
Peer-review module: Allow asking all participants of an event to be in the reviewer team
|
**Is your feature request related to a problem? Please describe.**
For the IPAC conference series, we use the indico Peer-Review module.
All conference participants except students and industry attendees can be asked to serve as reviewers.
For IPAC'23 I had to manually import reviewers. As LPR manager I had no access to the participants list, so I had to ask the LOC to send me Excel files of the participants to re-import them as reviewers.
It would be good to have the option to include all event participants in the reviewer team.
**Describe the solution you'd like**
Have the option to have all event participants included in the reviewers team.
**Describe alternatives you've considered**
Download conference participants in an excel file and run a script that uploads the conference participants in the reviewing teams
**Additional context**
It would be good to have a tick box here where, instead of listing names, we could select "use all event participants".
<img width="767" alt="copie_ecran 2024-11-06 à 15 10 01" src="https://github.com/user-attachments/assets/36a3af27-a871-4e51-a0f1-2faafd3bce1c">
Thank you in advance,
Nicolas
|
open
|
2024-11-06T14:18:09Z
|
2024-11-06T15:10:53Z
|
https://github.com/indico/indico/issues/6601
|
[
"enhancement"
] |
NicolasDelerueLAL
| 3
|
deepfakes/faceswap
|
machine-learning
| 683
|
ImportError: cannot import name run
|
➜ faceswap git:(master) python setup.py
Traceback (most recent call last):
File "setup.py", line 12, in <module>
from subprocess import CalledProcessError, run, PIPE, Popen
ImportError: cannot import name run
|
closed
|
2019-03-24T08:47:42Z
|
2019-03-25T09:47:42Z
|
https://github.com/deepfakes/faceswap/issues/683
|
[] |
nbhhcty
| 2
|
seleniumbase/SeleniumBase
|
web-scraping
| 3,361
|
Need help with Chrome profiles
|
How do I run uc_driver in UC Mode with a specific Chrome profile?
For example:
```
from seleniumbase import Driver
from seleniumbase import undetected
user_data_dir = "C:\\Users\\ripssas\\AppData\\Local\\Google\\Chrome\\User Data"
option = undetected.ChromeOptions()
option.add_argument(r"--user-data-dir=user_data_dir")
option.add_argument(r'--profile-directory=Profile 2')
driver = Driver(uc=True)
driver = Driver.Chrome(options = option)
url = "https://freebitco.in"
driver.uc_open_with_reconnect(url, 4)
driver.uc_gui_click_captcha()
driver.quit()
```
I tried the above and got this error:
```
PS C:\Users\ripas\Downloads\Web LAB\Botasaurus Firefox> py "3rd Bot.py"
Traceback (most recent call last):
File "C:\Users\ripas\Downloads\Web LAB\Botasaurus Firefox\3rd Bot.py", line 10, in <module>
driver = webdriver.Chrome(options = option)
File "C:\Users\ripas\AppData\Local\Programs\Python\Python313\Lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 45, in __init__
super().__init__(
~~~~~~~~~~~~~~~~^
browser_name=DesiredCapabilities.CHROME["browserName"],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
keep_alive=keep_alive,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\ripas\AppData\Local\Programs\Python\Python313\Lib\site-packages\selenium\webdriver\chromium\webdriver.py", line 66, in __init__
super().__init__(command_executor=executor, options=options)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ripas\AppData\Local\Programs\Python\Python313\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 241, in __init__
self.start_session(capabilities)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "C:\Users\ripas\AppData\Local\Programs\Python\Python313\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 329, in start_session
response = self.execute(Command.NEW_SESSION, caps)["value"]
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ripas\AppData\Local\Programs\Python\Python313\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 384, in execute
self.error_handler.check_response(response)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "C:\Users\ripas\AppData\Local\Programs\Python\Python313\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 232, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.SessionNotCreatedException: Message: session not created: Chrome failed to start: crashed.
(session not created: DevToolsActivePort file doesn't exist)
(The process started from chrome location C:\Program Files\Google\Chrome\Application\chrome.exe is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Stacktrace:
GetHandleVerifier [0x00007FF7EBF9FB05+28789]
(No symbol) [0x00007FF7EBF086E0]
(No symbol) [0x00007FF7EBDA592A]
(No symbol) [0x00007FF7EBDE01E4]
(No symbol) [0x00007FF7EBDDBC59]
(No symbol) [0x00007FF7EBE2A77C]
(No symbol) [0x00007FF7EBE29D60]
(No symbol) [0x00007FF7EBE1F1E3]
(No symbol) [0x00007FF7EBDEA938]
(No symbol) [0x00007FF7EBDEBAA1]
GetHandleVerifier [0x00007FF7EC2D933D+3410093]
GetHandleVerifier [0x00007FF7EC2EE7DD+3497293]
GetHandleVerifier [0x00007FF7EC2E2A73+3448803]
GetHandleVerifier [0x00007FF7EC067BBB+848171]
(No symbol) [0x00007FF7EBF13C3F]
(No symbol) [0x00007FF7EBF0F6E4]
(No symbol) [0x00007FF7EBF0F87D]
(No symbol) [0x00007FF7EBEFED49]
BaseThreadInitThunk [0x00007FF8A78A53E0+16]
RtlUserThreadStart [0x00007FF8A838485B+43]
```
I have no solution for this because I have little experience in this area.
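For what it's worth, a minimal sketch (assuming recent SeleniumBase versions where `Driver()` accepts a `user_data_dir` argument; the profile path is a placeholder) that passes the profile to UC Mode directly instead of mixing in raw Selenium options:
```python
from seleniumbase import Driver

# Point UC Mode at a dedicated profile folder rather than the main Chrome
# "User Data" folder, which a running Chrome instance may already have locked.
user_data_dir = r"C:\Users\ripssas\uc_profile"

driver = Driver(uc=True, user_data_dir=user_data_dir)
try:
    driver.uc_open_with_reconnect("https://freebitco.in", 4)
    driver.uc_gui_click_captcha()
finally:
    driver.quit()
```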
|
closed
|
2024-12-22T15:54:56Z
|
2024-12-24T13:10:41Z
|
https://github.com/seleniumbase/SeleniumBase/issues/3361
|
[
"duplicate",
"invalid usage",
"UC Mode / CDP Mode"
] |
thienha1
| 4
|
darrenburns/posting
|
automation
| 80
|
Support for encrypting/decrypting passwords for BasicAuth
|
Hi,
I'd like to propose a feature that lets the user set a master password via an ENV variable, which is then used to encrypt/decrypt the password field in the *.posting.yaml file.
Current situation for a *.posting.yaml file
```
name: Get Player
description: Gets a player by id
url: https://localhost:8080/player/4
auth:
type: basic
basic:
username: test
password: test
headers:
- name: Content-Type
value: application/json
```
Proposed change:
```
name: Get Player
description: Gets a player by id
url: https://localhost:8080/player/4
auth:
type: basic
basic:
username: test
enc_password: U2FsdGVkX18UCL1XW/Xxg7oyj7sBlg8p0ot+f3rW6Lc=
headers:
- name: Content-Type
value: application/json
```
The example "enc_password" was generated with
`echo "test" | openssl enc -e -aes-256-cbc -a -salt -pbkdf2`
and "123456" as password.
Reason for this proposal:
Encrypting the BasicAuth credentials would make it easier to share a larger number of collections safely, for example via VCS, without the possibility of accidentally pushing a plaintext password. There would also be no need to remove the plaintext credentials by hand before pushing such a collection (or several collections), and to type them in again after cloning the repo.
The following steps show how this could work in the application:
* Set ENV variable POSTING_MASTER_PASSWORD=123456
* Start posting and load a collection with `posting --collection myCollection`
  * Decrypt the password field with POSTING_MASTER_PASSWORD and set the plaintext password together with the username in the "Auth" tab
* Save collection
  * Encrypt the value of the password field with POSTING_MASTER_PASSWORD and write it back to the *.posting.yaml file
If the ENV variable is not set, the default behavior (no encryption/decryption) is used. A sketch of this flow follows.
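To illustrate the proposal, a minimal sketch (using the `cryptography` package's Fernet recipe instead of the openssl AES-256-CBC command above, purely for brevity; the salt handling and iteration count are placeholder choices):
```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(master_password: str, salt: bytes) -> bytes:
    # Derive a 32-byte key from the master password (the salt would be stored per collection).
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))


master = os.environ["POSTING_MASTER_PASSWORD"]
fernet = Fernet(derive_key(master, salt=b"per-collection-salt"))

enc_password = fernet.encrypt(b"test").decode()             # written to *.posting.yaml
plaintext = fernet.decrypt(enc_password.encode()).decode()  # used for the actual request
```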
Thanks for this awesome tool and best regards
|
closed
|
2024-08-10T16:02:58Z
|
2024-11-18T18:03:06Z
|
https://github.com/darrenburns/posting/issues/80
|
[] |
sczsh
| 0
|
autokey/autokey
|
automation
| 388
|
Don't delete external folders
|
When the drop-down menu is used to add an external folder, removing it from the tree view should *never* delete the directory from disk, especially not recursively. This is a dangerous design; thankfully nothing too critical in my devel directory was blown away, and thankfully I hadn't added my home directory by mistake.
|
closed
|
2020-03-21T23:24:05Z
|
2020-03-22T10:44:18Z
|
https://github.com/autokey/autokey/issues/388
|
[
"enhancement"
] |
morganrallen
| 4
|
jina-ai/serve
|
fastapi
| 5,587
|
Relax the protobuf dependency to allow 3.19 (the version pre-installed on Colab) and support it
|
Colab comes with protobuf 3.19 pre-installed.
However, jina currently requires protobuf >= 3.20, which means installing jina will install the latest protobuf.
This breaks compatibility with tensorflow on Colab.
|
closed
|
2023-01-10T09:01:11Z
|
2023-01-16T16:39:25Z
|
https://github.com/jina-ai/serve/issues/5587
|
[] |
alaeddine-13
| 1
|
httpie/cli
|
rest-api
| 903
|
Can order between JSON and files be controlled in multipart form?
|
I'm wondering if the order between JSON fields and file attachments can be controlled.
I need to test my api by having files come before JSON data in the stream. It seems that httpie always sends JSON first and then files.
|
closed
|
2020-04-24T11:36:08Z
|
2020-09-28T12:08:12Z
|
https://github.com/httpie/cli/issues/903
|
[] |
msageryd
| 5
|
dgtlmoon/changedetection.io
|
web-scraping
| 1,729
|
[feature] Non Expiring - Header Requests
|
Hello,
I have a new addition to the "request headers" feature you already implemented last month for the JavaScript fetchers.
I will be happy to sponsor this new feature as well, either in this project or a fork if anyone else wants to do it.
Most request headers expire after some time, ranging from 5 minutes to a few hours depending on the server-side implementation, and I guess this is done to make web scraping harder even if you use pre-defined request headers as a bypass. This makes the "import headers from txt file" feature not so useful, because even if I add a list of 10 brand-new request headers there, they will all expire after some time, and then I will have to add another 10 and so on, making this manual work rather than automated.
My idea is for the user to pre-define a set of request headers with the feature you have already implemented, and then the following steps should happen:
**Step 1.**
ChangeDetection checks for the CSS/JSONPath/JQ/XPath Filters , using the "request headers" provided by the user and sends notification to user if there is the case
**Step 2.**
Before starting a new check with the old headers, ChangeDetection reloads/refreshes the same page automatically to get a brand-new set of request headers from the server, so they can be used for the next check.
**Step 3.**
ChangeDetection uses the brand new headers grabbed at Step 2 instead of the "request headers" provided by the user at Step 1 for the next CSS/JSONPath/JQ/XPath check
This can be extrapolated to all following checks, so each check will have a brand-new set of request headers.
The method described above will always have fresh request headers for each check, because ChangeDetection can get new request headers after each page refresh it makes. The user only provides one set of request headers for the first check; then ChangeDetection takes care of the rest and uses new request headers for each following check, grabbed from the previous one. A rough sketch of capturing headers this way follows.
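For illustration only, a rough sketch of the idea using Playwright directly (changedetection.io's internal fetcher hooks will look different; the URL and extra header are placeholders): capture the headers of the document request during one check so the next check can start from a fresh set.
```python
from playwright.sync_api import sync_playwright

captured_headers = {}

def remember_headers(request):
    # Keep the headers of the top-level document request for reuse on the next check.
    if request.resource_type == "document":
        captured_headers.update(request.headers)

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(extra_http_headers={"X-From-Previous-Check": "1"})
    page.on("request", remember_headers)
    page.goto("https://example.com")
    browser.close()

print(captured_headers)  # these would replace the stale user-supplied set on the next check
```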
Please let me know if that can be implemented, or if you have a better solution than the one described by me for the "request headers" not to expire.
I'm looking forward to hearing from you.
P.S. - The tool below would be a huge game changer for you, as this undetected chromedriver, which I have tested with Selenium and Playwright, bypasses most bot detection tools I've seen:
https://github.com/ultrafunkamsterdam/undetected-chromedriver
|
closed
|
2023-08-08T00:24:30Z
|
2023-10-05T10:40:26Z
|
https://github.com/dgtlmoon/changedetection.io/issues/1729
|
[
"enhancement"
] |
bluescreen222
| 1
|
wkentaro/labelme
|
deep-learning
| 1,037
|
show coordinate
|
Can labelme show the mouse coordinate (x, y)?
|
closed
|
2022-06-15T08:40:47Z
|
2022-06-25T04:03:31Z
|
https://github.com/wkentaro/labelme/issues/1037
|
[] |
alicera
| 0
|
PokeAPI/pokeapi
|
graphql
| 624
|
ARM Builds
|
I've been trying to build PokeAPI for Raspberry Pi, and I finally reached the point where I just decided to use Docker Compose, but I realized that the builds on Docker Hub were all Linux amd64, and there were no ARM builds. Are ARM builds planned any time soon?
|
closed
|
2021-06-05T04:46:37Z
|
2021-06-06T14:28:14Z
|
https://github.com/PokeAPI/pokeapi/issues/624
|
[] |
MusicDev33
| 6
|
dgtlmoon/changedetection.io
|
web-scraping
| 1,805
|
[bug] windows - URLs with extended chars cause backup to not run - 500 Internal Server Error
|
**Describe the bug**
On the locally hosted Python version the Backup link in the navbar leads to a 500 Internal Server Error page.
Tried updating and restarting the app, same result.
**Version**
v0.45.1
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'Backup'
2. See error
**Desktop (please complete the following information):**
- OS: Win10
- Browser: Firefox
- Version: 117.0.1
|
open
|
2023-09-20T16:38:21Z
|
2023-12-11T13:11:38Z
|
https://github.com/dgtlmoon/changedetection.io/issues/1805
|
[
"windows",
"triage"
] |
kazerniel
| 13
|
tiangolo/uwsgi-nginx-flask-docker
|
flask
| 193
|
SSL gives 502.
|
Hi!
I use this template to run the container with a secure SSL connection, just a simple app that does `return 'hello, vasya'`.
Everything is fine when I run it on localhost, or on the server when I open it via IP, but when I try to open the site via the domain name over HTTPS, I get an error:
**502 Bad Gateway.**
And in the Nginx logs I see **_no live upstreams while connecting to upstream_**.
It's interesting because I have 10 containers with FastAPI on the same server, all with HTTPS, and they all work fine.
For SSL proxy I use this solution:
https://github.com/nginx-proxy/docker-letsencrypt-nginx-proxy-companion
What do you think about it? Does it seem like a bug?
---UPD---
I checked it with my own Dockerfile and SSL works fine
**_Dockerfile:_**
```
FROM python:3.7.2-slim
COPY ./app /app
WORKDIR "/app"
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["main.py"]
```
|
closed
|
2020-07-03T09:02:58Z
|
2020-12-17T00:28:03Z
|
https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/193
|
[
"answered"
] |
mohovkm
| 2
|
fastapi/sqlmodel
|
sqlalchemy
| 259
|
How to convert sub-query SQLModel objects to Strawberry Objects
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
#---- SQL Models ----
from typing import List, Optional
from sqlmodel import Column, Field, Relationship, SQLModel
class ResourceModel(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
name: str
status_id: Optional[int] = Field(default=None, foreign_key="status.id")
status: Optional[StatusModel] = Relationship(back_populates="resources")
class StatusModel(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
name: str
resources: List["ResourceModel"] = Relationship(back_populates="status")
#----- Strawberry schemas ----
import strawberry
@strawberry.experimental.pydantic.type(ResourceModel, all_fields=True)
class ResourceGQL:
pass
@strawberry.experimental.pydantic.type(StatusModel, all_fields=True)
class StatusGQL:
pass
#----- Strawberry Resolvers ----
def get_resource(
filters: dict = {},
fields: tuple = (),
session: Session = get_session,
) -> List[ResourceGQL]:
statement = select(ResourceModel) # .options(load_only(fields))
for k in filters:
if filters[k] is not None:
statement = statement.where(getattr(ResourceModel, k) == filters[k])
with session() as conn:
ret = conn.exec(statement).all()
data = list()
for row in ret:
gql_row = ResourceGQL.from_pydantic(row)
print(gql_row)
data.append(gql_row)
return data
#--- Example object output of this resolver ---
ResourceGQL(id=1, name="test", status=StatusModel(id=1, name='github'))
#--- Expected output ----
ResourceGQL(id=1, name="test", status=StatusGQL(id=1, name='github'))
```
### Description
I'm currently building a GraphQL API using FastAPI + SQLModel + Strawberry. When writing my GraphQL resolvers (example code provided) I try to hardcode as little as possible to make them as flexible as I can. However, when using a complex model which contains sub-queries, the returned data is only parsed at the top level (in this example, using the ResourceGQL schema).
I was wondering if there's a pythonic way to define the relationship between SQLModel classes and Strawberry schema ones, in order not to have to iterate inside the SQLModel class to parse out the data. This is especially relevant when using complex models, where there might be more than 2 levels.
Disclaimer: I'm new to both SQLModel and Strawberry !
Just for clarity: I'm sending the question to the SQLModel repo instead of Strawberry as I think the answer should be somewhere in the definitions of the SQLModel. And if not possible, the iteration needs to be done on the SQLModel object.
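One workaround I can think of (only a rough sketch; whether `all_fields=True` should convert nested relationships automatically is exactly the open question here) is to convert the nested object explicitly after `from_pydantic`:
```python
# Reuses ResourceModel / ResourceGQL / StatusGQL from the example code above.
def to_resource_gql(row: "ResourceModel") -> "ResourceGQL":
    gql_row = ResourceGQL.from_pydantic(row)
    # Convert the relationship by hand so the sub-object is a Strawberry type
    # rather than the raw SQLModel instance.
    if row.status is not None:
        gql_row.status = StatusGQL.from_pydantic(row.status)
    return gql_row
```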
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.9.7
### Additional Context
_No response_
|
closed
|
2022-03-01T11:29:40Z
|
2022-03-01T11:45:48Z
|
https://github.com/fastapi/sqlmodel/issues/259
|
[
"question"
] |
martarho
| 0
|
jupyterhub/repo2docker
|
jupyter
| 417
|
RStudio Initialization Errors/ Unable to Connect to Service
|
I've been getting these bugs lately. Does anyone know how we would suggest debugging a build?
|
closed
|
2018-09-26T18:11:54Z
|
2019-05-21T18:27:56Z
|
https://github.com/jupyterhub/repo2docker/issues/417
|
[
"bug"
] |
jzf2101
| 2
|
mljar/mercury
|
data-visualization
| 438
|
Option to hide specific cells/output
|
Hello,
I am quite new to Mercury, but while looking at it for the first time, trying to convert my very long notebooks into something shareable, I noticed that there is no way to hide specific cells from the resulting web app.
I mean that, usually, in my notebooks I have some intermediate outputs between the initial input and the final output.
Is there a way to hide some cells that I do not want to be rendered by Mercury?
Like nbconvert does, can we assign a custom tag to each of the cells we want to hide and then instruct Mercury to skip rendering those specific ones?
SOURCE: https://stackoverflow.com/questions/49907455/hide-code-when-exporting-jupyter-notebook-to-html
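For comparison, the tag-based nbconvert mechanism referenced above can be driven from Python like this (a small sketch; the tag names "hide" and "hide-output" are arbitrary):
```python
from traitlets.config import Config
from nbconvert import HTMLExporter

c = Config()
# Cells tagged "hide" are dropped entirely; cells tagged "hide-output" keep code but lose outputs.
c.TagRemovePreprocessor.remove_cell_tags = ("hide",)
c.TagRemovePreprocessor.remove_all_outputs_tags = ("hide-output",)
c.TagRemovePreprocessor.enabled = True
c.HTMLExporter.preprocessors = ["nbconvert.preprocessors.TagRemovePreprocessor"]

exporter = HTMLExporter(config=c)
body, resources = exporter.from_filename("my_long_notebook.ipynb")
with open("my_long_notebook.html", "w") as f:
    f.write(body)
```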
|
closed
|
2024-03-30T10:43:15Z
|
2024-04-02T07:15:53Z
|
https://github.com/mljar/mercury/issues/438
|
[] |
informatica92
| 1
|
akfamily/akshare
|
data-science
| 4,972
|
AKShare API issue report
|
When calling `ak.get_futures_daily(start_date='20240101', end_date='20240618', market='INE')` I get `KeyError: 'symbol'`; no trading data is found.
I found that only the INE exchange fails; the other exchanges all work.
In addition, for SHFE, fetching 20100309 and 20100310 also raises `json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)`.
|
closed
|
2024-06-19T01:45:56Z
|
2024-06-19T06:11:22Z
|
https://github.com/akfamily/akshare/issues/4972
|
[
"bug"
] |
EdisonYu777
| 1
|
iperov/DeepFaceLab
|
deep-learning
| 5,633
|
Can't extract data_src faceset on DeepFaceLab_NVIDIA_RTX3000_series
|
I get an error message when trying to extract the faceset.
Error while processing data: Traceback (most recent call last):
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
return fn(*args)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
target_list, run_metadata)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
(0) Not found: No algorithm worked!
[[{{node Conv2D}}]]
[[Add_29/_141]]
(1) Not found: No algorithm worked!
[[{{node Conv2D}}]]
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 71, in _subprocess_run
result = self.process_data (data)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Extractor.py", line 104, in process_data
rects_extractor=self.rects_extractor,
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Extractor.py", line 145, in rects_stage
rects = data.rects = rects_extractor.extract (rotated_image, is_bgr=True)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\facelib\S3FDExtractor.py", line 193, in extract
olist = self.model.run ([ input_image[None,...] ] )
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 167, in run
return nn.tf_sess.run ( self.run_output, feed_dict=feed_dict)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
run_metadata_ptr)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
run_metadata)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
raise type(e)(node_def, op, message) # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
(0) Not found: No algorithm worked!
[[node Conv2D (defined at C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:101) ]]
[[Add_29/_141]]
(1) Not found: No algorithm worked!
[[node Conv2D (defined at C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:101) ]]
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node Conv2D:
S3FD/conv1_1/weight/read (defined at C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:61)
Pad (defined at C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87)
Input Source operations connected to node Conv2D:
S3FD/conv1_1/weight/read (defined at C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:61)
Pad (defined at C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87)
Original stack trace for 'Conv2D':
File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 118, in _main
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 62, in _subprocess_run
self.on_initialize(client_dict)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Extractor.py", line 73, in on_initialize
self.rects_extractor = facelib.S3FDExtractor(place_model_on_cpu=place_model_on_cpu)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\facelib\S3FDExtractor.py", line 170, in __init__
self.model.build_for_run ([ ( tf.float32, nn.get4Dshape (None,None,3) ) ])
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 154, in build_for_run
self.run_output = self.__call__(self.run_placeholders)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\facelib\S3FDExtractor.py", line 93, in forward
x = tf.nn.relu(self.conv1_1(x))
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
return self.forward(*args, **kwargs)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 101, in forward
x = tf.nn.conv2d(x, weight, strides, 'VALID', dilations=dilations, data_format=nn.data_format)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 2397, in conv2d
name=name)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 972, in conv2d
data_format=data_format, dilations=dilations, name=name)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
op_def=op_def)
File "C:\Users\ivand\Downloads\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
self._traceback = tf_stack.extract_stack_for_node(self._c_op)
0%| | 0/655 [00:04<?, ?it/s]
-------------------------
Images found: 655
Faces detected: 0
-------------------------
I think this error could be caused by a memory allocation failure, so it is important to note that I have 16 GB of RAM. Can someone please explain how to fix this issue?
|
open
|
2023-03-01T15:22:29Z
|
2023-06-08T23:06:36Z
|
https://github.com/iperov/DeepFaceLab/issues/5633
|
[] |
ivand321
| 9
|
TencentARC/GFPGAN
|
pytorch
| 295
|
Original training model for GFPGANv1.3
|
Hi. XinTao
Thanks for your reply!
But I couldn't find the original bilinear model or the original training model for GFPGANv1.3.
I tried to convert GFPGANv1.3 back to the training model, but it didn't work.
Have you ever shared them?
|
closed
|
2022-10-19T09:45:05Z
|
2022-10-19T10:35:48Z
|
https://github.com/TencentARC/GFPGAN/issues/295
|
[] |
fyushan
| 0
|
rio-labs/rio
|
data-visualization
| 171
|
Add Support for Circular Gradients
|
Rio already ships with linear gradients, but circular gradients are conspicuously missing.
|
open
|
2024-11-18T20:33:25Z
|
2024-11-18T20:33:25Z
|
https://github.com/rio-labs/rio/issues/171
|
[
"good first issue",
"new feature"
] |
mad-moo
| 0
|
graphql-python/graphene-django
|
django
| 758
|
merge_querysets breaks annotations with DjangoFilterConnectionField
|
We recently updated to v2.5 and have found that many of our connections that use `DjangoFilterConnectionField` no longer work as expected (aka the filters aren't applied).
After some digging it looks like any filter that uses annotations are having them stripped off by `DjangoConnectionField.merge_querysets` [here](https://github.com/graphql-python/graphene-django/blob/master/graphene_django/fields.py#L96).
Specifically it looks like PR #693 started this issue because before that `merge_querysets` wouldn't be called in many situations.
Thoughts? Is there a better way to handle annotations, or is this just a bug?
|
closed
|
2019-08-27T17:17:04Z
|
2019-12-26T19:59:14Z
|
https://github.com/graphql-python/graphene-django/issues/758
|
[
"🐛bug"
] |
jarcoal
| 7
|
KaiyangZhou/deep-person-reid
|
computer-vision
| 388
|
The pretrained model cannot be downloaded; how can I solve this?
|
Hello, due to my university's network, this pretrained model cannot be downloaded. Is there another link? The model zoo link in the documentation doesn't open either.
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='drive.google.com', port=443): Max retries exceeded with url: /uc?id=1LaG1EJpHrxdAxKnSCJ_i0u-nbxSAeiFY (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001FC76D8F8C8>: Failed to est
ablish a new connection: [WinError 10060] 由于连接方在一段时间后没有正确答复或连接的主机没有反应,连接尝试失败。'))
Is there a download link hosted inside China? Many thanks.
|
closed
|
2020-11-13T01:05:59Z
|
2020-11-29T06:00:25Z
|
https://github.com/KaiyangZhou/deep-person-reid/issues/388
|
[] |
ww5171351
| 2
|
ymcui/Chinese-BERT-wwm
|
tensorflow
| 148
|
When will a CLM pretrained language model such as GPT-2 be provided?
|
At the moment there doesn't seem to be a particularly good CLM language model for Chinese.
|
closed
|
2020-09-24T06:53:09Z
|
2020-10-14T05:04:07Z
|
https://github.com/ymcui/Chinese-BERT-wwm/issues/148
|
[] |
piekey1994
| 1
|
snarfed/granary
|
rest-api
| 128
|
JSONFeed is including a title when it shouldn't be
|
My Microformats->JSONFeed conversion is producing a title for one of the posts that shouldn't have had a title. The title and content match in the JSONFeed too. Here's the post permalink: https://aaronparecki.com/2018/01/18/20/
|
closed
|
2018-01-19T13:44:38Z
|
2018-01-19T20:03:12Z
|
https://github.com/snarfed/granary/issues/128
|
[] |
aaronpk
| 2
|
JaidedAI/EasyOCR
|
deep-learning
| 1,024
|
Got NNPack Error (MBP M1 Pro)
|
I encounter this error when I try to use easyocr for the first time:
<img width="1029" alt="image" src="https://github.com/JaidedAI/EasyOCR/assets/6346811/aa6fb3ad-937d-4658-aad1-acad31e72e83">
What should I do to fix this "[W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware" error?
Note: I created a new conda env with python ver 3.8.16 and install easyocr using pip.
Thanks!
|
open
|
2023-05-21T03:07:37Z
|
2024-05-08T07:19:57Z
|
https://github.com/JaidedAI/EasyOCR/issues/1024
|
[] |
great69
| 3
|
CTFd/CTFd
|
flask
| 2,066
|
Add simpler Javascript access to user's handle and email address
|
We can use this tactic to put in user specific content that we can then access with Javascript. This way we can cache the HTML but also have more user specific content in the UI.
|
closed
|
2022-03-08T20:20:55Z
|
2022-04-08T19:14:58Z
|
https://github.com/CTFd/CTFd/issues/2066
|
[] |
ColdHeat
| 1
|
Buuntu/fastapi-react
|
sqlalchemy
| 207
|
Typescript error 'Router' cannot be used as a JSX component.
|
Failed to compile.
/app/src/index.tsx
TypeScript error in /app/src/index.tsx(8,4):
'Router' cannot be used as a JSX component.
Its instance type 'BrowserRouter' is not a valid JSX element.
The types returned by 'render()' are incompatible between these types.
Type 'React.ReactNode' is not assignable to type 'import("/node_modules/@types/react-transition-group/node_modules/@types/react/ts5.0/index").ReactNode'. TS2786
```
6 |
7 | ReactDOM.render(
> 8 | <Router>
| ^
9 | <App />
10 | </Router>,
11 | document.getElementById('root')
```
|
open
|
2023-08-21T07:26:49Z
|
2024-02-04T08:40:05Z
|
https://github.com/Buuntu/fastapi-react/issues/207
|
[] |
Blaz-Strusnik
| 2
|
mljar/mercury
|
data-visualization
| 333
|
Clear output directory between runs
|
When you run a notebook several times and have an [`OutputDir`](https://runmercury.com/docs/output-widgets/outputdir/), files from previous runs will still be available in the output directory.
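Until that is changed, a possible workaround (a sketch only, assuming `OutputDir` exposes its directory via a `.path` attribute) is to empty the directory at the top of the notebook:
```python
import os
import shutil

import mercury as mr

output_dir = mr.OutputDir()

# Remove leftovers from previous runs before writing anything new.
for name in os.listdir(output_dir.path):
    target = os.path.join(output_dir.path, name)
    if os.path.isdir(target):
        shutil.rmtree(target)
    else:
        os.remove(target)

with open(os.path.join(output_dir.path, "result.txt"), "w") as f:
    f.write("fresh output for this run\n")
```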
|
open
|
2023-07-06T16:03:21Z
|
2023-11-17T11:32:31Z
|
https://github.com/mljar/mercury/issues/333
|
[
"bug",
"good first issue",
"help wanted"
] |
pplonski
| 8
|
huggingface/transformers
|
pytorch
| 36,665
|
Cannot load siglip2 processor
|
### System Info
`processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224")` raises
```
File ~/miniforge3/envs/*/lib/python3.10/site-packages/transformers/models/siglip/tokenization_siglip.py:139, in SiglipTokenizer.get_spm_processor(self)
137 def get_spm_processor(self):
138 tokenizer = spm.SentencePieceProcessor(**self.sp_model_kwargs)
--> 139 with open(self.vocab_file, "rb") as f:
140 sp_model = f.read()
141 model_pb2 = import_protobuf()
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
I tried a few of the latest transformers versions, but none of them work.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoProcessor, AutoModel
processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224")
```
### Expected behavior
The processor should load successfully instead of raising the error above.
|
closed
|
2025-03-12T12:31:00Z
|
2025-03-16T10:49:06Z
|
https://github.com/huggingface/transformers/issues/36665
|
[
"bug"
] |
hello-peiqi
| 5
|
dask/dask
|
pandas
| 11,726
|
⚠️ Upstream CI failed ⚠️
|
[Workflow Run URL](https://github.com/dask/dask/actions/runs/13200030606)
<details><summary>Python 3.12 Test Summary</summary>
```
dask/dataframe/dask_expr/tests/test_collection.py::test_warn_annotations: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted.
Emitted warnings: [].
```
</details>
|
closed
|
2025-02-07T07:05:31Z
|
2025-02-10T12:32:51Z
|
https://github.com/dask/dask/issues/11726
|
[
"upstream"
] |
github-actions[bot]
| 0
|
3b1b/manim
|
python
| 1,223
|
Spacing between letters in Text(), TextMobject() and TexMobject()
|
Let's say we have
```
a = TextMobject('1', '2', '3', '4')
self.play(Write(a))
```
Output:
```
1234
```
If I want to print it like:
```
1 2 3 4
```
How can I do it?
I tried to do it with `TextMobject('1', '2', '3', '4', buff=4)`, with no luck.
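A possible workaround sketch (untested, and assuming the manim version in use exposes `VGroup.arrange`): build one mobject per character and space them with an explicit `buff`:
```python
from manimlib.imports import *

class SpacedDigits(Scene):
    def construct(self):
        # One TextMobject per character, arranged with an explicit gap
        digits = VGroup(*[TextMobject(s) for s in ("1", "2", "3", "4")])
        digits.arrange(RIGHT, buff=1)   # buff controls the spacing
        self.play(Write(digits))
```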
|
closed
|
2020-09-05T15:23:41Z
|
2020-09-07T23:15:15Z
|
https://github.com/3b1b/manim/issues/1223
|
[] |
JimChr-R4GN4R
| 2
|
lux-org/lux
|
pandas
| 79
|
Unsupported Category dtype as_type
|
There is a bug when using Lux with [this example](https://datashader.org/getting_started/Pipeline.html) from Datashader.
```python
import pandas as pd
import numpy as np
from collections import OrderedDict as odict
num=10000
np.random.seed(1)
dists = {cat: pd.DataFrame(odict([('x',np.random.normal(x,s,num)),
('y',np.random.normal(y,s,num)),
('val',val),
('cat',cat)]))
for x, y, s, val, cat in
[( 2, 2, 0.03, 10, "d1"),
( 2, -2, 0.10, 20, "d2"),
( -2, -2, 0.50, 30, "d3"),
( -2, 2, 1.00, 40, "d4"),
( 0, 0, 3.00, 50, "d5")] }
df = pd.concat(dists,ignore_index=True)
df["cat"]=df["cat"].astype("category") # If commented, the df.intent=vis line doesn't break
df #Select and export the scatterplot vis (big circular blob)
vis = df.exported[0]
df.intent = vis
df # This errors on the key 'cat' which is a categorical
```

We should look into supporting [Categorical](https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html) data types in Pandas. More importantly, we should look into whether bugs show up when we perform `astype` operations for other data types.
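A possible interim workaround sketch, assuming the breakage is limited to the `category` dtype itself: cast the column back to `object` before setting the intent (this reuses `df` and `vis` from the snippet above):
```python
# Hypothetical workaround until category dtypes are handled by Lux
df["cat"] = df["cat"].astype("object")
df.intent = vis
```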
|
closed
|
2020-08-27T14:14:51Z
|
2020-10-03T08:56:11Z
|
https://github.com/lux-org/lux/issues/79
|
[
"bug",
"help wanted"
] |
dorisjlee
| 3
|
amisadmin/fastapi-amis-admin
|
fastapi
| 123
|
how to use behind a proxy with stripped path prefix
|
我在服务前面用到了一层nginx正向代理,正向代理中指定了一个path prefix路由到这个项目,并在nginx中用rewrite去掉了这个前缀,例如浏览器请求`domain.com/prefix/admin`,实际请求的就是`/admin`。
但项目生成的前端链接中不包含`/prefix`前缀,例如`domain.com/admin/page`,导致前端无法访问正确的后端地址`domain.com/prefix/admin/page`。
这个问题要如何解决?
我尝试过在admin的fastapi app中添加`root_path`,没有效果。
---
I used an nginx reverse proxy in front of my service. In this reverse proxy, I specified a path prefix that routes to the project, and in nginx, I used a rewrite to remove this prefix. For example, when a browser requests `domain.com/prefix/admin`, it actually requests the `/admin` path.
However, the frontend links generated by the project do not include the `/prefix` prefix. For example, `domain.com/admin/page`. This causes the frontend to be unable to access the correct backend address `domain.com/prefix/admin/page`.
How can I solve this problem? I've tried adding a `root_path` in the FastAPI app for admin, but it didn't work.
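For reference, a minimal sketch of the plain-FastAPI side of this (`root_path`); whether fastapi-amis-admin respects `root_path` when generating its frontend links appears to be exactly the open question here:
```python
from fastapi import FastAPI

# Tell FastAPI it is served behind /prefix (nginx strips the prefix before proxying)
app = FastAPI(root_path="/prefix")

@app.get("/admin/ping")
def ping():
    return {"ok": True}
```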
|
open
|
2023-09-19T13:44:46Z
|
2023-09-20T02:31:24Z
|
https://github.com/amisadmin/fastapi-amis-admin/issues/123
|
[] |
LoadingZhang
| 1
|
neuml/txtai
|
nlp
| 446
|
Add extractor reference output format
|
Add a new output format named 'reference'. When this is set, the extractor pipeline will add an additional field to the outputs called reference. The reference is the context record that best matches an answer.
This makes it possible to cite answer sources.
|
closed
|
2023-03-02T22:27:06Z
|
2023-03-02T22:36:09Z
|
https://github.com/neuml/txtai/issues/446
|
[] |
davidmezzetti
| 0
|
dfki-ric/pytransform3d
|
matplotlib
| 7
|
Publish pytransform on PyPI
|
There is already a package called pytransform: https://pypi.org/project/pytransform/, so we should rename this package. Possible names:
* pytransform3d
* transform3d
(https://packaging.python.org/tutorials/packaging-projects/)
|
closed
|
2018-08-23T10:10:25Z
|
2018-12-07T14:54:05Z
|
https://github.com/dfki-ric/pytransform3d/issues/7
|
[] |
AlexanderFabisch
| 2
|
microsoft/nni
|
machine-learning
| 5,332
|
Will a tuner takes intermediate results into account?
|
**Describe the issue**:
Recently I have been trying the early stopping algorithms built into NNI. However, I find that ```report_final_result()``` seems not to work when a trial is stopped early, and the best intermediate result is not displayed in the web portal (Trial Details→Default metric), where the final results of **succeeded** trials are shown.
As the tuner works based on historical **final results**, I want to ask:
1. Will the assessor report a "final result" inferred from the intermediate results already received (instead of getting it through ```report_final_result()``` conventionally)?
2. How does the tuner take advantage of those early-stopped (as well as user-cancelled) trials?
|
open
|
2023-02-03T02:58:17Z
|
2023-02-07T08:23:11Z
|
https://github.com/microsoft/nni/issues/5332
|
[
"feature request"
] |
zzzzzx-1115
| 2
|
errbotio/errbot
|
automation
| 1,243
|
Format output
|
In order to let us help you better, please fill out the following fields as best you can:
### I am...
* [ ] Reporting a bug
* [ ] Suggesting a new feature
* [ ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [x] Here about something else
### I am running...
* Errbot version: 5.2.0
* OS version: LinuxMint 19 tara
* Python version: 3+
* Using a virtual environment: no
### Issue description
I want to output returned JSON as a table or in a prettier format.
In general I have lots of bot commands running and have generally hacked the output to look pretty or like code...
I am looking to produce output similar to that of `!render test` (tables, bold, colours etc.) or the `!help` table.
I am looking for how to achieve this and can't find documentation to help!
Is there a direction I could be guided to?
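For what it's worth, a hedged sketch of one approach: errbot command return values are treated as Markdown, so a command can build a Markdown table by hand (how it renders still depends on the chat backend):
```python
from errbot import BotPlugin, botcmd

class PrettyOutput(BotPlugin):
    @botcmd
    def mytable(self, msg, args):
        # Build a Markdown table from (key, value) pairs; backends that
        # support tables (as in the !render test examples) will format it.
        rows = [("status", "ok"), ("count", "42")]
        lines = ["| key | value |", "|---|---|"]
        lines += ["| {} | {} |".format(k, v) for k, v in rows]
        return "\n".join(lines)
```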
|
closed
|
2018-07-30T16:14:31Z
|
2018-11-09T15:45:08Z
|
https://github.com/errbotio/errbot/issues/1243
|
[] |
Stevieag
| 2
|
MaartenGr/BERTopic
|
nlp
| 1,231
|
python3.11 run quickstart , numba 0.59.0 deprecation decorator warning
|
NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
@numba.jit()
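A possible user-side stopgap sketch (assuming `NumbaDeprecationWarning` is importable from `numba.core.errors`), until the upstream `@numba.jit()` call is updated:
```python
import warnings
from numba.core.errors import NumbaDeprecationWarning

# Silence only the numba deprecation warnings, then import BERTopic
warnings.filterwarnings("ignore", category=NumbaDeprecationWarning)

from bertopic import BERTopic
```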
|
closed
|
2023-05-04T05:18:48Z
|
2023-09-27T09:06:30Z
|
https://github.com/MaartenGr/BERTopic/issues/1231
|
[] |
desean
| 3
|
plotly/jupyter-dash
|
jupyter
| 73
|
Using on SageMaker Studio
|
I'd like to use this on AWS Sagemaker Studio, but I think Sagemaker Studio is JupyterLab 1.
Has anyone been able to get either the inline or jupyterlab modes working on Sagemaker Studio?
|
closed
|
2022-01-05T03:04:35Z
|
2024-05-23T17:18:39Z
|
https://github.com/plotly/jupyter-dash/issues/73
|
[] |
tonyreina
| 1
|
3b1b/manim
|
python
| 1,318
|
For 3d vectors, tip is not getting oriented in proper direction.
|
I'm having a problem creating 3D vectors: the tips are not getting oriented in the correct direction, so it looks like the tip of each vector is broken. I've just started learning manim, so can anybody help me with this? I tried issue #774 but it didn't work for me.
I'm using this code:
```
from manimlib.imports import *
class vectors(VectorScene,ThreeDScene):
def construct(self):
v1=Vector(np.array([1,0,0]),color=RED)
v2=Vector(np.array([0,1,0]),color=BLUE)
v3=Vector(np.array([0,0,1]),color=GREEN)
v4=Vector(np.array([1,1,1]),color=YELLOW)
vec=VGroup(v1,v2,v3,v4)
self.set_camera_orientation(phi=PI/3,theta=-PI/2,distance=6)
self.begin_ambient_camera_rotation(0.6)
self.play(ShowCreation(vec))
self.wait(10)
```
|
open
|
2021-01-20T21:13:01Z
|
2021-01-20T21:13:01Z
|
https://github.com/3b1b/manim/issues/1318
|
[] |
MK137
| 0
|
the0demiurge/ShadowSocksShare
|
flask
| 78
|
Using Shadowsocks to get online on Ubuntu
|
Previously I set up a manual SOCKS host,
then ran the command `sslocal -s <account> -p 8097 -k "<password>" -b 127.0.0.1 -l 1080`
and could get through the firewall.
The procedure was very simple.
How exactly is this done? I've forgotten...
Please advise.
|
closed
|
2019-09-06T03:55:26Z
|
2019-12-23T06:19:22Z
|
https://github.com/the0demiurge/ShadowSocksShare/issues/78
|
[] |
henbucuoshanghai
| 6
|
Miserlou/Zappa
|
django
| 1,477
|
How can I deploy project with dlib by Zappa?
|
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
I am using Python 2.7 with virtualenv with below requirements.txt.
```
dlib
flask
opencv-python
zappa
```
so the installation commands are something like below:
```
virtualenv venv
. venv/bin/activate
pip install -r requirements.txt
zappa status
zappa update
```
## Expected Behavior
I expect `import dlib` to succeed, since dlib is included in the [lambda-packages](https://github.com/Miserlou/lambda-packages).
## Actual Behavior
<!--- Tell us what happens instead -->
But it fails with below error in `zappa tail`
```
[1523465915797] Failed to find library...right filename?
[1523465916752] libjpeg.so.8: cannot open shared object file: No such file or directory: ImportError
Traceback (most recent call last):
File "/var/task/handler.py", line 509, in lambda_handler
return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 237, in lambda_handler
handler = cls()
File "/var/task/handler.py", line 129, in __init__
self.app_module = importlib.import_module(self.settings.APP_MODULE)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/tmp/face-detection/app.py", line 6, in <module>
import dlib
ImportError: libjpeg.so.8: cannot open shared object file: No such file or directory
```
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. The link is https://1neq3ilpbk.execute-api.ap-northeast-1.amazonaws.com/dev
2.
3.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.45.1
* Operating System and Python version: Ubuntu 16.04, Python 2.7
* The output of `pip freeze`:
```
% pip freeze
argcomplete==1.9.2
base58==0.2.4
boto3==1.7.4
botocore==1.10.4
certifi==2018.1.18
cfn-flip==1.0.3
chardet==3.0.4
click==6.7
dlib==19.10.0
docutils==0.14
durationpy==0.5
Flask==0.12.2
future==0.16.0
futures==3.1.1
hjson==3.0.1
idna==2.6
itsdangerous==0.24
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.19.0
MarkupSafe==1.0
numpy==1.14.2
opencv-python==3.4.0.12
placebo==0.8.1
python-dateutil==2.6.1
python-slugify==1.2.4
PyYAML==3.12
requests==2.18.4
s3transfer==0.1.13
six==1.11.0
toml==0.9.4
tqdm==4.19.1
troposphere==2.2.1
Unidecode==1.0.22
urllib3==1.22
Werkzeug==0.12
wsgi-request-logger==0.4.6
zappa==0.45.1
```
* Link to your project (optional):
* Your `zappa_settings.py`:
```
% cat zappa_settings.json
{
"dev": {
"app_function": "app.app",
"aws_region": "ap-northeast-1",
"profile_name": "default",
"project_name": "face-detection",
"runtime": "python2.7",
"s3_bucket": "zappa-lynx3c9vr",
"slim_handler": true
}
```
|
open
|
2018-04-11T17:05:11Z
|
2018-04-11T19:24:49Z
|
https://github.com/Miserlou/Zappa/issues/1477
|
[] |
wkentaro
| 1
|
suitenumerique/docs
|
django
| 397
|
Reorder visibility options in dropdown menu
|
## Bug Report
**Problematic behavior**

The ordering of the visibility options is unintuitive as it's not gradual.
**Expected behavior/code**
Ordering options like following would make more sense
- Restricted
- Authenticated
- Public
|
closed
|
2024-10-29T13:17:12Z
|
2025-03-10T08:09:15Z
|
https://github.com/suitenumerique/docs/issues/397
|
[
"frontend"
] |
virgile-dev
| 1
|
pallets-eco/flask-sqlalchemy
|
sqlalchemy
| 739
|
create_engine() missing 1 required positional argument: 'engine_opts'
|
I'm trying to use `db.create_engine` to connect to a second database in a view. My app was working yesterday, but after I tried it on a new machine with a fresh virtualenv, it's no longer working. I think this is due to the changes in #684.
```python
x = 'postgres://*****'
engine = db.create_engine(x)
```
```pytb
create_engine() missing 1 required positional argument: 'engine_opts'
```
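Two hedged workaround sketches: pass an (empty) options dict to satisfy the new two-argument signature, or bypass the wrapper and use SQLAlchemy's own factory. `x` and `db` are the objects from the snippet above:
```python
import sqlalchemy

engine = db.create_engine(x, {})          # supply the new engine_opts argument
# or, skipping the flask-sqlalchemy wrapper entirely:
engine = sqlalchemy.create_engine(x)
```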
|
closed
|
2019-05-19T19:26:53Z
|
2020-12-05T20:21:51Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/739
|
[] |
jjRick
| 5
|
man-group/arctic
|
pandas
| 301
|
noisy warning message NB treating all values as 'exists' - no longer sparse
|
#### Arctic Version
```
v1.35.0
```
#### Arctic Store
```
TickStore
```
#### Platform and version
Not related
#### Description of problem and/or code sample that reproduces the issue
```python
logging.basicConfig(level=logging.INFO)
# Import tick by tick
for i in loops:
df = pd.DataFrame(
{"price": [price]}, index=[datetime.datetime.now(mktz("UTC"))]
)
tick_library.write("symbol1", df)
...
# Compress daily ticks by rewriting
date_range = DateRange("2016-12-09 08:00", "2016-12-10 08:00", CLOSED_OPEN)
df_backup = tick_library.read("symbol1", date_range)
tick_library.delete("symbol1", date_range)
tick_library.write("symbol1", df_backup)
```
The way I use the tick library is described in https://github.com/manahl/arctic/issues/192 , and I have to compress ticks every few hours, otherwise my disk fills up.
During the rewrite step,
```
tick_library.write("symbol1", df_backup)
```
A warning
```
2016-12-09 17:20:30,768 [WARNING] [arctic.tickstore.tickstore] NB treating all values as 'exists' - no longer sparse
```
is emitted constantly, which is quite noisy. This warning is emitted whenever the second argument to `tick_library.write` is a `pandas.DataFrame`, even though my DataFrame is not sparse at all. I think it would be better to warn just once, e.g. by using the Python `warnings` module.
BTW, is there any better way to compress existing data instead of doing **read-delete-write** manually?
Thanks.
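As a user-side stopgap (plain stdlib `logging`, nothing arctic-specific), the offending logger can be raised above WARNING while a once-only warning is discussed:
```python
import logging

# Hide the repeated "treating all values as 'exists'" message from this logger only
logging.getLogger("arctic.tickstore.tickstore").setLevel(logging.ERROR)
```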
|
closed
|
2016-12-09T09:27:37Z
|
2017-04-17T16:35:41Z
|
https://github.com/man-group/arctic/issues/301
|
[] |
mckelvin
| 8
|
stanfordnlp/stanza
|
nlp
| 801
|
Strange differences between stanza demo and script
|
Hi,
I found some strange differences between the stanza demo (http://stanza.run/) and the output of my script while comparing the two.
1) **_Splitting a single word in two_**
One example: "37 East, Exit 1A will be closed for bridge work, Tues. night, 9:30 p.m.-5:30 a.m."
Here "1A" is split into "1" and "A". The output is then:
{
"id": 5,
"text": "1",
"lemma": "1",
"upos": "NUM",
"xpos": "CD",
"feats": "NumForm=Digit|NumType=Card",
"head": 4,
"deprel": "dep",
"start_char": 14,
"end_char": 15
}
{
"id": 1,
"text": "A",
"lemma": "a",
"upos": "DET",
"xpos": "DT",
"feats": "Definite=Ind|PronType=Art",
"head": 4,
"deprel": "nsubj:pass",
"start_char": 15,
"end_char": 16
}
But we cannot find this splitting in the Stanza demo.
2) **_Relationship between words_**
For example, with "37 East, Exit 1A will be closed for bridge work, Tues. night, 9:30 p.m.-5:30 a.m", when running this sentence through my script we can see that "Exit" and "closed" have no relation. But if we run the same sentence in the stanza demo, the two have an "nsubj:pass" relation.
Do you guys have any idea about these differences?
I'm using this call in my script: `print(*[f'id: {word.id}\tword: {word.text}\thead id: {word.head}\thead: {sent.words[word.head-1].text if word.head > 0 else "root"}\tdeprel: {word.deprel}' for sent in doc.sentences for word in sent.words], sep='\n')`
|
closed
|
2021-09-08T00:06:40Z
|
2021-11-14T19:33:56Z
|
https://github.com/stanfordnlp/stanza/issues/801
|
[
"question",
"stale"
] |
sangeethsn
| 7
|
tflearn/tflearn
|
data-science
| 352
|
image_preloader() bug
|
In `data_utils.py`, line 501:
`l = l.strip('\n').split()`
`if not files_extension or any(flag in l(0) for flag in files_extension):`
The variable `l` is a list object, so `l(0)` has to be changed to `l[0]`.
Thank you for providing this novel library. :-)
|
open
|
2016-09-19T08:13:14Z
|
2016-09-19T19:55:24Z
|
https://github.com/tflearn/tflearn/issues/352
|
[] |
yjn870
| 1
|
PrefectHQ/prefect
|
data-science
| 16,939
|
Prefect run runtime not stopped when run is canceled
|
### Bug summary
Hey!
We run our tasks on `fargate` in `ecs`.
Every once in a while our tasks silently fail; they are then marked as `Running` in the Prefect Cloud UI, but the run reports close to no logs in either `Cloudwatch` or the Prefect Cloud UI.
The only logs we see are:
```
EVENTS 1738079190030 15:46:25.912 | DEBUG | prefect.runner - Checking for cancelled flow runs... 1738079185912
EVENTS 1738079200025 15:46:37.091 | DEBUG | prefect.utilities.services.critical_service_loop - Starting run of functools.partial(<bound method Runner._check_for_cancelled_flow_runs of Runner(name='runner-ce1a9cc9-ebd1-447f-8f86-69c42bba3e61')>, should_stop=<function Runner.execute_flow_run.<locals>.<lambda> at 0x7f4edc7e6480>, on_stop=<bound method CancelScope.cancel of <anyio._backends._asyncio.CancelScope object at 0x7f4edaad3a90>>) 1738079197092
```
This is repeating every 10 seconds.
When we then cancel the flow run from the prefect cloud ui, the logs report:
```
EVENTS 1738079200025 15:46:37.545 | INFO | prefect.runner - Found 1 flow runs awaiting cancellation. 1738079197545
EVENTS 1738079200025 15:46:37.545 | WARNING | prefect.runner - Unable to kill process 17: The process was not found. Marking flow run as cancelled. 1738079197546
EVENTS 1738079200025 15:46:37.697 | DEBUG | Flow run 'secret-jackrabbit' - Running 1 deployment pull steps 1738079197698
EVENTS 1738079200025 15:46:37.723 | DEBUG | prefect.client - Connecting to API at https://api.prefect.cloud/api/accounts/.... 1738079197723
EVENTS 1738079200025 15:46:37.747 | DEBUG | prefect.client - Connecting to API at https://api.prefect.cloud/api/accounts/.... 1738079197748
EVENTS 1738079200025 15:46:37.749 | DEBUG | Flow run 'secret-jackrabbit' - Changing working directory to '/opt/prefect' 1738079197750
EVENTS 1738079200025 15:46:37.750 | DEBUG | Flow run 'secret-jackrabbit' - Importing flow code from 'flowlib/prefect/flow.py:load_and_transform' 1738079197751
EVENTS 1738079210023 15:46:46.662 | DEBUG | prefect.utilities.services.critical_service_loop - Starting run of functools.partial(<bound method Runner._check_for_cancelled_flow_runs of Runner(name='runner-ce1a9cc9-ebd1-447f-8f86-69c42bba3e61')>, should_stop=<function Runner.execute_flow_run.<locals>.<lambda> at 0x7f4edc7e6480>, on_stop=<bound method CancelScope.cancel of <anyio._backends._asyncio.CancelScope object at 0x7f4edaad3a90>>) 1738079206663
EVENTS 1738079210023 15:46:46.662 | DEBUG | prefect.runner - Checking for cancelled flow runs... 1738079206663
```
The run is then marked as cancelled in prefect cloud ui, but the ecs-tasks is still alive, and continues logging:
```
EVENTS 1738079210023 15:46:46.662 | DEBUG | prefect.utilities.services.critical_service_loop - Starting run of functools.partial(<bound method Runner._check_for_cancelled_flow_runs of Runner(name='runner-ce1a9cc9-ebd1-447f-8f86-69c42bba3e61')>, should_stop=<function Runner.execute_flow_run.<locals>.<lambda> at 0x7f4edc7e6480>, on_stop=<bound method CancelScope.cancel of <anyio._backends._asyncio.CancelScope object at 0x7f4edaad3a90>>) 1738079206663
EVENTS 1738079210023 15:46:46.662 | DEBUG | prefect.runner - Checking for cancelled flow runs... 1738079206663
```
It looks like the ecs-task keeps on running https://github.com/PrefectHQ/prefect/blob/26ae72909896078d4436e4b7e90075d586347f53/src/prefect/runner/runner.py#L803-L807 perhaps, and this loop is never ended.
We have so far only seen this in tasks using quite a bit of resources, but not given enough resources. For instance, when we have a task needing 14gb memory, but we only spec it to get 8gb. This job is also creating quite a bit of threads / prefect-tasks.
I have unfortunately not managed to reproduce this issue in a more controlled manner, but based on `EVENTS 1738079200025 15:46:37.545 | WARNING | prefect.runner - Unable to kill process 17: The process was not found. Marking flow run as cancelled. 1738079197546`, it looks a little like that the process the `prefect.runner`-loop is responsible for killing on a `cancel`-request is already gone, but that nothing has caught this? How this process has silently died, is not reported anywhere I have managed to see. But as mentioned, I'm guessing the high memory/thread-usage is involved. The flows we see this issue in, is running [`dlt`](https://dlthub.com/)-jobs.
Have you seen an issue like this before? If so, any suggestions on how to avoid this?
If we had another `critical_service_loop` responsible for checking whether the process is actually still running, and killing all the other loops if no flow runs are still running, that could maybe be a way of handling this. The current state of things is a little unfortunate for us, as we have to monitor for long-running ecs-tasks, and if a task has been running for longer than X, it might be a case of this type of run, and we then have to kill the ecs-task manually.
### Version info
```Text
Tested with 2.20.2 and 3.1.14
Version: 2.20.2
API version: 0.8.4
Python version: 3.11.9
Git commit: 51c3f290
Built: Wed, Aug 14, 2024 11:27 AM
OS/Arch: linux/x86_64
Profile: dope
Server type: cloud
Version: 3.1.14
API version: 0.8.4
Python version: 3.11.9
Git commit: 5f1ebb57
Built: Thu, Jan 23, 2025 1:22 PM
OS/Arch: linux/x86_64
Profile: ephemeral
Server type: ephemeral
Pydantic version: 2.9.2
Server:
Database: sqlite
SQLite version: 3.37.2
Integrations:
prefect-docker: 0.6.2
```
### Additional context
_No response_
|
open
|
2025-02-03T11:45:50Z
|
2025-02-03T11:45:50Z
|
https://github.com/PrefectHQ/prefect/issues/16939
|
[
"bug"
] |
mch-sb
| 0
|
unionai-oss/pandera
|
pandas
| 1,142
|
Pandera 0.14+ check_types behavior change?
|
When upgrading our library to Pandera > 0.14 the `check_types` decorator doesn't seem to be checking DataFrames in the same way. It may be by design for some of the changes in 0.14, but it seems like a bug.
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandera.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandera.
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
from typing import Tuple
import pandera as pa
from pandera.typing import DataFrame, Series
class TestModel(pa.DataFrameModel):
my_column: Series[int]
@pa.check_types
def func_to_test(df: DataFrame[TestModel], val: int) -> Tuple[str, str]:
return type(df), type(val)
func_to_test(df="not_a_dataframe", val=5)
```
#### Expected behavior
I would have expected passing "not_a_dataframe" in as a string for `df` in my test function to raise a validation error for `df`. However, now the function works without the validation happening. This occurred as expected in my tests for `pandera<0.14` but when trying to upgrade it has an issue.
In `pandera==0.13.4` the error raised was `AttributeError: 'str' object has no attribute 'pandera'` which is kind of a hidden bug that was previously being raised. Note that for the 0.13.4 test you also have to change the `TestModel` definition to be `pa.SchemaModel` instead of a `DataFrameModel`.
#### Desktop (please complete the following information):
- OS: macOS 13.2
- pandera 0.14.5
|
open
|
2023-03-24T17:36:10Z
|
2023-03-24T17:52:10Z
|
https://github.com/unionai-oss/pandera/issues/1142
|
[
"bug"
] |
kr-hansen
| 1
|
PaddlePaddle/PaddleHub
|
nlp
| 1,610
|
PaddleHub: loading a trained model gives a different predict result for the same image on every run
|
- Version and environment information
1) PaddleHub and PaddlePaddle versions: PaddleHub 1.8.2, PaddlePaddle 1.8.5
2) System environment: CentOS 7.6.1810, Python 3.7.9
Using the code below to load a trained classification model, the result is different every time for the same image. The automatically downloaded pretrained model is the same one used during training.
# coding:utf-8
import base64
import cv2
import numpy as np
import paddlehub as hub
from paddlehub.dataset.base_cv_dataset import BaseCVDataset

module_eff = "efficientnetb7_imagenet"
checkpoint_dir = "./hdstamp_v2"
dataset_dir = "./data"
test_list_file = "test.txt"
label_list_file = "labels.txt"
use_gpu = False

class DemoDataset(BaseCVDataset):
    def __init__(self):
        self.dataset_dir = dataset_dir
        super(DemoDataset, self).__init__(
            base_path=self.dataset_dir,
            test_list_file=test_list_file,
            label_list_file=label_list_file)

def base64_to_nparr(img):
    img_data = base64.b64decode(img)
    nparr = np.fromstring(img_data, np.uint8)
    img_np = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    return img_np

def module_initialization():
    module = hub.Module(name=module_eff)
    input_dict, output_dict, program = module.context(trainable=False)
    dataset = DemoDataset()
    data_reader = hub.reader.ImageClassificationReader(
        image_width=module.get_expected_image_width(),
        image_height=module.get_expected_image_height(),
        images_mean=module.get_pretrained_images_mean(),
        images_std=module.get_pretrained_images_std(),
        dataset=dataset)
    feature_map = output_dict["feature_map"]
    feed_list = [input_dict["image"].name]
    # Setup RunConfig for PaddleHub Fine-tune API
    hub_config = hub.RunConfig(
        use_data_parallel=False,
        use_cuda=use_gpu,
        checkpoint_dir=checkpoint_dir)
    task = hub.ImageClassifierTask(
        data_reader=data_reader,
        feed_list=feed_list,
        feature=feature_map,
        num_classes=dataset.num_labels,
        config=hub_config)
    return task

def process(img64, task):
    data = []
    img_np = base64_to_nparr(img64)
    data.append(img_np)
    res = task.predict(data=data, return_result=True, accelerate_mode=False)
    return res
|
open
|
2021-09-03T08:26:20Z
|
2021-11-02T13:42:01Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/1610
|
[
"cv"
] |
MichaelLiu-TJ
| 5
|
jina-ai/clip-as-service
|
pytorch
| 951
|
Support open-clip-torch 2.24.0
|
Currently building module clip-server fails with:
```
> Checking runtime dependencies for clip_server-0.8.3-py3-none-any.whl
> - docarray==0.21.0 not satisfied by version 0.40.0
> - open-clip-torch<2.9.0,>=2.8.0 not satisfied by version 2.24.0
```
Please support latest open-clip-torch version 2.24.0 https://github.com/mlfoundations/open_clip/releases/tag/v2.24.0
Best regards
Jonas
|
open
|
2024-06-04T21:49:42Z
|
2024-06-10T01:05:46Z
|
https://github.com/jina-ai/clip-as-service/issues/951
|
[] |
onny
| 1
|
OpenBB-finance/OpenBB
|
python
| 6,860
|
[🕹️] Social Media Poster - Meme
|
### What side quest or challenge are you solving?
Side Quest: Create a Social Media Post Highlighting OpenBB’s Customizability
### Points
50
### Description
I've shared a Meme on X
### Provide proof that you've completed the task

Hope I can bring a smile on your face :)
|
closed
|
2024-10-24T08:08:00Z
|
2024-10-28T07:23:57Z
|
https://github.com/OpenBB-finance/OpenBB/issues/6860
|
[] |
Khaan25
| 2
|
cobrateam/splinter
|
automation
| 1,050
|
Clicking a checkbox should not submit the form
|
Using a Django client, checkboxes are considered to be LxmlControlElements, where `click` then will submit the form:
https://github.com//cobrateam/splinter/blob/986ce0a10c52f08196b32b91f752182cb7517892/splinter/driver/lxmldriver.py#L421-L433
I've used `browser.find_by_id("id_foo").first.click()`.
The workaround is using `browser.check("foo")`.
|
open
|
2022-06-16T09:19:12Z
|
2022-06-28T22:05:09Z
|
https://github.com/cobrateam/splinter/issues/1050
|
[
"bug",
"django"
] |
blueyed
| 1
|
developmentseed/lonboard
|
data-visualization
| 514
|
Google Colab crashes for large datasets
|
Hi,
I would like to use lonboard in Google Colab to visualize a GeoDataFrame (`gdf`). The code runs without any errors, but no output is displayed, even though it works on my local PC. What could be the issue?
|
closed
|
2024-05-12T18:09:52Z
|
2024-09-24T19:33:52Z
|
https://github.com/developmentseed/lonboard/issues/514
|
[] |
adamekcerv
| 5
|
geopandas/geopandas
|
pandas
| 3,330
|
BUG: Buffering a Polygon geometry returns a MultiPolygon
|
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of geopandas.
- [ ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
#### Code Sample, a copy-pastable example
```python
import geopandas as gpd
from shapely.geometry import Polygon
random_island = Polygon(
[
(120.15542550000002, 26.783172500000003),
(120.15533160000001, 26.783220300000004),
(120.15528600000002, 26.783173600000005),
(120.15525120000001, 26.783138900000004),
(120.15518410000001, 26.783130500000002),
(120.15511570000001, 26.783208400000003),
(120.15507550000001, 26.783261000000003),
(120.15511170000002, 26.783349600000005),
(120.15516940000002, 26.783401100000003),
(120.15522700000001, 26.783372400000005),
(120.15526810000001, 26.783418700000002),
(120.15530000000001, 26.7834831),
(120.15538230000001, 26.783491),
(120.15543360000001, 26.783524400000005),
(120.15554880000002, 26.783535500000003),
(120.15557710000002, 26.783449000000005),
(120.15555950000001, 26.783384800000004),
(120.15557070000001, 26.783329000000002),
(120.15553720000001, 26.783246100000003),
(120.15554620000002, 26.783201200000004),
(120.15549590000002, 26.783136600000002),
(120.15542550000002, 26.783172500000003),
]
)
gdf_random_island = gpd.GeoDataFrame(geometry=[random_island], crs=4326).to_crs(32651)
gdf_random_island.geometry = gdf_random_island.geometry.buffer(100_000)
print(gdf_random_island.geom_type)
print(gdf_random_island.explode().area.values)
>>> 0 MultiPolygon
>>> dtype: object
>>> [1.02001815e-02 3.13813375e+10]
```
#### Problem description
Buffering certain `Polygon` geometries produces a weird `MultiPolygon` instead of a single buffered `Polygon`; the `MultiPolygon` is made up of a really small, sliver-like `Polygon` and a properly buffered one.
#### Expected Output
A `GeoDataFrame` with a single `Polygon` geometry type, without the sliver-like `Polygon`.
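A hedged cleanup sketch (a workaround, not a fix for the underlying buffering behaviour): keep only the largest part of each resulting `MultiPolygon`:
```python
from shapely.geometry import MultiPolygon

def largest_part(geom):
    # Keep the biggest polygon, dropping sliver artifacts from the buffer
    if isinstance(geom, MultiPolygon):
        return max(geom.geoms, key=lambda g: g.area)
    return geom

gdf_random_island.geometry = gdf_random_island.geometry.apply(largest_part)
```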
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:35:20) [Clang 16.0.6 ]
executable : /Users/bogdan/mambaforge/envs/test/bin/python
machine : macOS-14.5-arm64-arm-64bit
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.12.1
GEOS lib : None
GDAL : 3.8.5
GDAL data dir: /Users/bogdan/mambaforge/envs/test/share/gdal
PROJ : 9.4.0
PROJ data dir: /Users/bogdan/mambaforge/envs/test/share/proj
PYTHON DEPENDENCIES
-------------------
geopandas : 0.14.4
numpy : 1.26.4
pandas : 2.2.2
pyproj : 3.6.1
shapely : 2.0.4
fiona : 1.9.6
geoalchemy2: None
geopy : 2.4.1
matplotlib : 3.8.4
mapclassify: 2.6.1
pygeos : None
pyogrio : None
psycopg2 : 2.9.9 (dt dec pq3 ext lo64)
pyarrow : 16.1.0
rtree : 1.2.0
</details>
|
open
|
2024-06-06T17:57:04Z
|
2024-06-08T06:10:35Z
|
https://github.com/geopandas/geopandas/issues/3330
|
[
"bug",
"upstream issue"
] |
bocalml
| 3
|
smarie/python-pytest-cases
|
pytest
| 97
|
`lazy_value` : build a nicer id when the function is a partial
|
See https://stackoverflow.com/a/62348105/7262247
|
closed
|
2020-06-12T16:05:47Z
|
2020-06-23T14:23:50Z
|
https://github.com/smarie/python-pytest-cases/issues/97
|
[
"enhancement"
] |
smarie
| 0
|
matplotlib/mplfinance
|
matplotlib
| 686
|
Bug Report:
|
Bug when using box_size = 'atr' in pnf charts in mplfinance : FutureWarning: Series.__getitem__ treating keys as positions is deprecated
OS Win 11 Pro 24H2 Desktop
IDE Visual Studio Code 1.96.2
Python 3.12.8
mplfinance 0.12.10b0
When using the 'atr' box_size parameter in creating pnf charts in mplfinance the `_utils.py script` throws this warning
\mplfinance\_utils.py:129: FutureWarning: Series.__getitem__ treating keys as positions is deprecated. In a future version, integer keys will always be treated as labels (consistent with DataFrame behavior). To access a value by position, use `ser.iloc[pos]` high = highs[i]
\mplfinance\_utils.py:130: FutureWarning: Series.__getitem__ treating keys as positions is deprecated. In a future version, integer keys will always be treated as labels (consistent with DataFrame behavior). To access a value by position, use `ser.iloc[pos]` low = lows[i]
\_utils.py:131: FutureWarning: Series.__getitem__ treating keys as positions is deprecated. In a future version, integer keys will always be treated as labels (consistent with DataFrame behavior). To access a value by position, use `ser.iloc[pos]` close_prev = closes[i-1]
This is the responsible function:
```
def _calculate_atr(atr_length, highs, lows, closes):
"""Calculate the average true range
atr_length : time period to calculate over
all_highs : list of highs
all_lows : list of lows
all_closes : list of closes
"""
if atr_length < 1:
raise ValueError("Specified atr_length may not be less than 1")
elif atr_length >= len(closes):
raise ValueError("Specified atr_length is larger than the length of the dataset: " + str(len(closes)))
atr = 0
for i in range(len(highs)-atr_length, len(highs)):
high = highs[i]
low = lows[i]
close_prev = closes[i-1]
tr = max(abs(high-low), abs(high-close_prev), abs(low-close_prev))
atr += tr
return atr/atr_length
```
This warning is suppressed when you modify the code like so
```
high = highs.iloc[i]
low = lows.iloc[i]
close_prev = closes.iloc[i-1]
```
This however creates a new problem: if you subsequently run a Renko chart, you get this error
LocalCache\local-packages\Python312\site-pa
high = highs.iloc[i]
^^^^^^^^^^
AttributeError: 'numpy.ndarray' object has no attribute 'iloc'
To fix this I created a small helper function
```
def safe_indexing(data, index):
if isinstance(data, pd.Series):
return data.iloc[index]
else:
return data[index]
```
and placed it in _utils.py
and updated _utils.py
```
high = highs[i]
low = lows[i]
close_prev = closes[i-1]
```
to
```
high = safe_indexing(highs, i)
low = safe_indexing(lows, i)
close_prev = safe_indexing(closes, i-1)
```
Now both charts render with no messages
|
open
|
2024-12-22T02:39:56Z
|
2024-12-25T21:49:07Z
|
https://github.com/matplotlib/mplfinance/issues/686
|
[
"bug"
] |
zbig01
| 2
|
huggingface/diffusers
|
deep-learning
| 11,145
|
Flux Fill LoRA / DreamBooth fine-tune support
|
I would like to ask whether there are plans to support LoRA fine-tuning for Flux Fill (inpainting). I see that the community has already contributed related code, and I would like to ask whether it can be integrated directly. Thank you
https://github.com/huggingface/diffusers/blob/7a350cc8da60ee67f12a5bf5d3b3fcb4c06ffdb0/examples/research_projects/dreambooth_inpaint/train_dreambooth_inpaint_lora_flux.py
|
open
|
2025-03-24T03:32:35Z
|
2025-03-24T03:32:35Z
|
https://github.com/huggingface/diffusers/issues/11145
|
[] |
sanbuphy
| 0
|
fastapi/sqlmodel
|
sqlalchemy
| 40
|
How to add sqlalchemy functional indexes to columns?
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
# How to do the SQLModel equivalent of:
from sqlalchemy import func, Index
Index('someindex', func.lower(mytable.c.somecol))
```
### Description
How to add [functional indexes ](https://docs.sqlalchemy.org/en/14/core/constraints.html#functional-indexes)to a table?
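A hedged sketch, relying on the fact that SQLModel table models are SQLAlchemy models underneath, so a functional `Index` can be declared against the mapped column after the class body (the model and index names here are illustrative):
```python
from typing import Optional

from sqlalchemy import Index, func
from sqlmodel import Field, SQLModel

class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str

# Functional index on lower(name), attached to the underlying table
Index("ix_hero_name_lower", func.lower(Hero.name))
```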
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.3
### Python Version
3.8.11
### Additional Context
_No response_
|
closed
|
2021-08-27T08:04:00Z
|
2021-08-28T13:25:00Z
|
https://github.com/fastapi/sqlmodel/issues/40
|
[
"question"
] |
gregsifr
| 1
|
vllm-project/vllm
|
pytorch
| 15,395
|
[Feature]: Add Warning for Chat Template Mismatches similar to SGLang
|
I'm requesting a feature to add warnings when users supply a chat template that differs from the official template for a particular model. Currently, vLLM simply acknowledges the supplied template without alerting users to potential performance issues.
**Current Behavior:**
When using a wrong chat template with vLLM, it only logs:
```
Using supplied chat template:
{% for message in messages %}{% if message.role == 'user' %}{{ message.content }}{% endif %}{% endfor %}
```
**Requested Behavior:**
While SGLang provides this helpful warning:
```
Using a chat_template: 'None', which is different from official chat template: 'llama-3-instruct', This discrepancy may lead to performance degradation.
```
I think that would improve the user experience by making potential issues more visible before they cause problems in production.
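An illustrative sketch of the kind of check being requested (not vLLM's actual code); it assumes the model's tokenizer bundles the official template in its `chat_template` attribute:
```python
import logging
from typing import Optional

from transformers import AutoTokenizer

def warn_on_template_mismatch(model_id: str, supplied: Optional[str]) -> None:
    official = AutoTokenizer.from_pretrained(model_id).chat_template
    if supplied != official:
        logging.warning(
            "Chat template for %s differs from the official one; "
            "this discrepancy may lead to performance degradation.", model_id)
```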
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
open
|
2025-03-24T12:30:10Z
|
2025-03-24T16:16:04Z
|
https://github.com/vllm-project/vllm/issues/15395
|
[
"good first issue",
"feature request"
] |
xihajun
| 1
|
MaartenGr/BERTopic
|
nlp
| 2,170
|
datamapplot is not defined
|
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Desribe the bug
When I try to use
`fig = topic_model.visualize_document_datamap(texts, embeddings=embeddings) fig.savefig("file.png")`
I encounter the following error.
>
> `NameError Traceback (most recent call last)
> Cell In[28], line 2
> 1 # import datamapplot
> ----> 2 fig = topic_model.visualize_document_datamap(texts, embeddings=embeddings)
> 3 fig.savefig("file.png")
>
> File d:\software\miniconda3\envs\transformers\lib\site-packages\bertopic\_bertopic.py:2620, in BERTopic.visualize_document_datamap(self, docs, topics, embeddings, reduced_embeddings, custom_labels, title, sub_title, width, height, **datamap_kwds)
> 2618 check_is_fitted(self)
> 2619 check_documents_type(docs)
> -> 2620 return plotting.visualize_document_datamap(
> 2621 self,
> 2622 docs,
> 2623 topics,
> 2624 embeddings,
> 2625 reduced_embeddings,
> 2626 custom_labels,
> 2627 title,
> 2628 sub_title,
> 2629 width,
> 2630 height,
> 2631 **datamap_kwds,
> 2632 )
>
> File d:\software\miniconda3\envs\transformers\lib\site-packages\bertopic\plotting\_datamap.py:148, in visualize_document_datamap(topic_model, docs, topics, embeddings, reduced_embeddings, custom_labels, title, sub_title, width, height, **datamap_kwds)
> ...
> 155 **datamap_kwds,
> 156 )
> 158 return figure
>
> NameError: name 'datamapplot' is not defined`
Even after I installed datamapplot, I still get the above error. Moreover, after restarting the computer and trying to run it again, the following error appears as well. I can only resolve the following issue by uninstalling both bertopic and datamapplot, and then reinstalling bertopic.
```
>
> `---------------------------------------------------------------------------
> FileNotFoundError Traceback (most recent call last)
> Cell In[1], [line 2](vscode-notebook-cell:?execution_count=1&line=2)
> [1](vscode-notebook-cell:?execution_count=1&line=1) import pandas as pd
> ----> [2](vscode-notebook-cell:?execution_count=1&line=2) from bertopic import BERTopic
> [3](vscode-notebook-cell:?execution_count=1&line=3) from sentence_transformers import SentenceTransformer
> [4](vscode-notebook-cell:?execution_count=1&line=4) from umap import UMAP
>
> File d:\software\miniconda3\envs\transformers\lib\site-packages\bertopic\__init__.py:3
> [1](file:///D:/software/miniconda3/envs/transformers/lib/site-packages/bertopic/__init__.py:1) from importlib.metadata import version
> ----> [3](file:///D:/software/miniconda3/envs/transformers/lib/site-packages/bertopic/__init__.py:3) from bertopic._bertopic import BERTopic
> [5](file:///D:/software/miniconda3/envs/transformers/lib/site-packages/bertopic/__init__.py:5) __version__ = version("bertopic")
> [7](file:///D:/software/miniconda3/envs/transformers/lib/site-packages/bertopic/__init__.py:7) __all__ = [
> [8](file:///D:/software/miniconda3/envs/transformers/lib/site-packages/bertopic/__init__.py:8) "BERTopic",
> [9](file:///D:/software/miniconda3/envs/transformers/lib/site-packages/bertopic/__init__.py:9) ]
>
> File d:\software\miniconda3\envs\transformers\lib\site-packages\bertopic\_bertopic.py:49
> [46](file:///D:/software/miniconda3/envs/transformers/lib/site-packages/bertopic/_bertopic.py:46) from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
> [48](file:///D:/software/miniconda3/envs/transformers/lib/site-packages/bertopic/_bertopic.py:48) # BERTopic
> ---> [49](file:///D:/software/miniconda3/envs/transformers/lib/site-packages/bertopic/_bertopic.py:49) from bertopic import plotting
> [50](file:///D:/software/miniconda3/envs/transformers/lib/site-packages/bertopic/_bertopic.py:50) from bertopic.cluster import BaseCluster
> [51](file:///D:/software/miniconda3/envs/transformers/lib/site-packages/bertopic/_bertopic.py:51) from bertopic.backend import BaseEmbedder
>
> File d:\software\miniconda3\envs\transformers\lib\site-packages\bertopic\plotting\__init__.py:7
> [5](file:///D:/software/miniconda3/envs/transformers/lib/site-packages/bertopic/plotting/__init__.py:5) from ._term_rank import visualize_term_rank
> ...
> [1118](file:///D:/software/miniconda3/envs/transformers/lib/pathlib.py:1118) def _opener(self, name, flags, mode=0o666):
> [1119](file:///D:/software/miniconda3/envs/transformers/lib/pathlib.py:1119) # A stub for the opener argument to built-in open()
> -> [1120](file:///D:/software/miniconda3/envs/transformers/lib/pathlib.py:1120) return self._accessor.open(self, flags, mode)
>
> FileNotFoundError: [Errno 2] No such file or directory: 'd:\\software\\miniconda3\\envs\\transformers\\lib\\site-packages\\datamapplot\\deckgl_template.html'`
```
### Reproduction
_No response_
### BERTopic Version
0.16.3
|
closed
|
2024-10-06T16:42:08Z
|
2024-12-09T12:16:43Z
|
https://github.com/MaartenGr/BERTopic/issues/2170
|
[
"bug"
] |
LauwXiang
| 5
|
matterport/Mask_RCNN
|
tensorflow
| 2,601
|
GPU running out of memory on Jetson Xavier NX
|
Hi, I am running the default coco dataset for my own segmentation use case. I am using a Jetson Xavier NX with JetPack 4.5 and TensorFlow 1.15.4. It is taking forever to load the model, and TensorFlow is taking too long to process even a single image. I am not sure if it is even using the built-in GPU. The Jetson only offers TensorFlow 1.15.5 or TensorFlow 2, so I'm not sure if it's the TensorFlow version that is causing the lag. How do I go about this?
It's taking forever to load the coco model file (h5py file). How do I check the memory usage and whether the GPU is being used at all?
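A small sketch for the last question, using TensorFlow 1.x APIs that should be available on that JetPack build; it only checks whether TF can see the GPU (for live memory usage, Jetson's `tegrastats` tool is the usual route):
```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# True if TF can place ops on a GPU device
print("GPU available:", tf.test.is_gpu_available())
# Lists CPU/GPU devices and their memory limits
print([d.name for d in device_lib.list_local_devices()])
```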
|
open
|
2021-06-16T10:34:41Z
|
2021-06-16T10:34:41Z
|
https://github.com/matterport/Mask_RCNN/issues/2601
|
[] |
akshayacharya97
| 0
|
widgetti/solara
|
jupyter
| 861
|
A question about theming / styling / etc..
|
Hi,
A question about theming in solara.
I have a `assets/theme.js` in the root of my project, a file with which I hope to control the theme(s) of the application. It seems to be picked up correctly by solara, and works well.
My question is: how can I modify the... (I do not know what the correct terminology is here) "true background" of the application, the "canvas" that goes black, for example, when one is using the pre-defined dark mode (the text also goes white in that case). Is that something that can be modified per app?
[A small pycafe example that hopefully shows what I am talking about](https://py.cafe/snippet/solara/v1?pycafe-app-view=false&pycafe-edit-enable=true#c=H4sIAHyINmcEA-1Y3W7jNhZ-FcFz4RSINZYs_wUQsLEnngK73Q0GKfaiKQpaoi2OJVJDUknUQd6936FkS04nmPS6lS9snnP4nf9Dyl8HiUr54GogilJp6xmVM83u5b38V_PTTxQ4kkt7L1O-827Znl_8cEUSHp5HYbN2k79WeVXIC5aLvYyHCbZwPSRREqTnTJjp9MLYOufx8FGkNrvypuNx-XS2gZ6zTY2GlyL0tBJ3_Mle3A9-5Hmu7gc__FmuD_dJPX4Ti55zvDu13-fcg2eZfZ8yffAKhO3qmxroaXfnbOvbjBdC7v07fPMG54JLts35b6yyKt6w3PBvwLQQq8paJS-AxPN4eKtFwXQ9vPQSlSsdD8uW8FaAFUsOe60qmXYY2472Vpg7XXGvt--vYR2DmnFvePfp5xtvdb3-98dP__v5vx-G3tYp8kymqjz1ttxDABt0T-3cYpgw-cAMlAqf-47UqcWSWc9WWhrPJUrI5psS5iNhg8uB5l8qoZEOaQ1Kv7HqXoJj65KaoaFgXdY2UxKUslapSPnoYeyHUz8AK2e1quzg6uvggWsjSCoEtFIWZUXkFktjdTlIMpGnmkPolxPHsq3hFkxX_4OrYDq-HKBW_t8sJ83qR05F1yxFim07kfMVUA3XayUtE5LrVzSQ6GjbyEKkZIQ7GDz_-nx5kjlZ8S0AMLt9rCz9sh6c7-3YyEPB_c_mVQFmoMi87-T6ZrwhTN8PTevq98JyjMiZASc1WYsazKdv0YkhBzLLv6f0KEda6fN86dKDAoSMZAXJtBHyeVHaeqPylOvbnCU8cz8JGEpQtRB1UT7fdgpsX-6h4lbsajd9jBd7X5uudJPs6riip50kV97w3U2wurmZo7_wvH_v_cSqB94JGg50tJQTXS9uZusliULwlmFGftSct4cDPSyhg4BEN5vNch01qCT9HzLB-wjXOumuj50di830hsBP0nsC70l1O02ld4jU2TY8b9jJtVba7dtMJ9Px8LJjCblTxInC6-lm2ueYCn4ZQ8zZbLWaXfeZj0xLIffE3GzW1-HiyHxuv2kgvRb82XWwWgKOyLD-FideJ3cW-8licTNZkyTkNkpzg3C-Gv3Vehx-aGBJ_qfKWBzBnWgXGhKProNoQdgQ_UDj84UdXbSjkD4NMITXGdOJYnkn24vv8po861jH-IbL1Wwz63N68V0E6_kCddMxz-K7mkdwq2FSfJ8xyHu98U9T_NMUr0f_b9UUv-LSopIDnTp0WCEGaIV3XsYMLt_peLYY83AZbKMw2LHtbJsGk0UwjqJFuJjOd_eSyVqoOI78yB9jpfdKhqNktxNxHE784AVxtBUyhT0G3MAP_bH3Dlew5EBCOH7jOPAnbovBde_AJckBOQDFWk2rFpNSkLA8j2NcvRyB494sdlVuVFVGZFAQ-iHoOWdJFsczf0xiCRbcKpUDawpdQE64xkaydwz0uR9iV-NA4Ae41WGFROEsHUmlC7zH_M51HE-wmQRzkRzieAGr5lipomgsAivl22pf1gQD9snRFHWpmVXAADjB4wWqMjx9Kpw3cyLxJ57AHbkn_53DO4Tks1HSwP6CETl0DlHeKH9kEayNQPmSSlLqTM-CgEADl52sKpiE-S46brdIJbAmtE2U9YFriXs8YoXbLDwAiS66Mbk3I7SWMNpzCesohm30wcD0dBFzLjVrhNW53-oqa1xp97iTNPGagmRUWiEYuCdT-GkzBD_zFMHHaulWQn5moXOPbKIYlEqgUF0SHHI_LvBs7k9Aq2Ar1yMkCCVNGuFVj4wZQAlAujsif4Coc2rWh8ClEJd5lwmoOyXynD3CT3eTc_tRWieBnG1HZb0vjtiuwHu8U1DImwD7MN4OqXpESFCO3XIkLHBOThO5Kg3bwQ-qEbfRlrmyudiOhMxx6yR1rjALhNlVl4thIYytiEtQWMttkuO2KBKXrJ6LxGjCh51OKyhKwl2QUKgzKhy53VFfgIKKGzsKxuuImVomNBoCRJN2Ksu3SqFQZwhPT0dLH5lMtM1DEFXRtE7odCio1HjTQpTmyBngSiZTlaD6EUVQ23IvmTZQOUbHIR4lfyp5AsMiH6V00liiZXHFRlO7AM2pEkuBvwZo_sBLB58zS26lgtAjGIU4lVoVHFeHypzKCuY28mCVdkST5SBAp8hGCHxpqFEoNEt_0VmA279WNLUbAwig0hwFyFyWQmd9nZA73FVeSM6digj5dv1Y1oWrDLw5Y1aii8CDB0viNG-mmbVlg-jE8UpqaMQTCe3ltLiOTpnljaUhYke-NnRqrVGO_ycaM2AssWpGs4pGqkP9vfgCJkZML8gaYxdv7JQZlwt6u0ZZOOMnrs31LplMJssRfBZQr6AAeJRt4iwXs5cc6NIiwSwPJsAkBKwoEzT9jmrK1LguwQbInMwxXKah1TjUTqKY6FXZHgUzGkjw2ogn4iO6gDdS4BhwFewOiub93y3dZGrWp_Hwgly5A-RIQqcKjBfyHhMKFIsDjKIOvHbc0CUj59a6opy4sYrappmSupKGP9iIQ6FOjMFABLiTUbqRcI0CF4UFCnxCQ0SES2-aFJM_5RmmoXrDaLyg6oIYzpuzUopQmuBUWmC6FdQSsK1Vi3mCOTMhHJfN6kFgpiLfsB0O3ctHZpMsVTjBzgvD0enUggLox-aOlbj_PNp6hemPHOMG__FAFOXlcg4SxznbXiJo0sJCEA0QcEQf-5Jy3Eg3DMiiahBE0Jp5ixvD0dXGRFJoDG5j5ksuLMdq8PwHxz7yjwkVAAA)
(in my app the theme is applied, here it is applied to the pycafe component - that was not my intention but funny side-effect).
In general, is there a prefered way to set the true background to a user defined colour/picture/gradient etc..
In addition, regarding the `solara.Style(..)` component.
Say we defined a CSS file that has bunch of classes.
1. Where in our component / page structure should we load the file (i.e. where should the `solara.Style(myccs.css)` so that the relevant components can have access to the defined classes in the .css file
2. Sometimes (through a lot of trial and error) I found out that I need to do something like a "hard overwrite" of the Vue classes to make the styling work, i.e. my CSS file has something like
```css
.logo-button.v-btn {
background: none !important;
box-shadow: none !important;
/*other definitions */
}
```
instead of
```css
.logo-button {
background: none !important;
box-shadow: none !important;
/*other definitions */
}
```
which was what I expected. (Note: this example is in a custom Layout component..)
Then I apply the css like this in python
```python
solara.Style('path/to/mycss.css')
solara.Button(children=list_of_kids, on_click=my_callable, flat=True, classes=['logo-button'])
```
Is there some kind of best practice for this, or an expectation of how styling should be approached?
Many many thanks!
|
open
|
2024-11-14T23:41:01Z
|
2024-12-25T12:58:44Z
|
https://github.com/widgetti/solara/issues/861
|
[] |
JovanVeljanoski
| 6
|