| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
python-gino/gino
|
sqlalchemy
| 141
|
Handle SQLAlchemy prefetch
|
* GINO version: 0.5, 0.6
* Python version: 3.6
### Description
Some clause elements may cause a prefetch in SQLAlchemy, for example:
```
User.insert().values(nickname=random_name)
```
(without `returning`)
|
closed
|
2018-02-18T09:17:09Z
|
2018-03-19T08:59:02Z
|
https://github.com/python-gino/gino/issues/141
|
[
"bug",
"help wanted"
] |
fantix
| 0
|
axnsan12/drf-yasg
|
django
| 307
|
Parameter 'items' property interferes with OrderedDict 'items' method
|
When trying to define an array parameter, the items definition is stored in an object property named `items`:
```
class Parameter(SwaggerDict):
def __init__(self, name, in_, description=None, required=None, schema=None,
type=None, format=None, enum=None, pattern=None, items=None, default=None, **extra):
...
self.items_ = items
```
However, `items` is the name of the method used by OrderedDict and other dictionaries to iterate over their entries.
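A minimal, self-contained sketch of the clash (using a plain OrderedDict subclass as a stand-in rather than the real SwaggerDict) shows why this is a problem:
```python
from collections import OrderedDict

class Demo(OrderedDict):
    """Stand-in for SwaggerDict, only to illustrate the name clash."""

d = Demo(type="array")
d.items = {"type": "string"}  # instance attribute now shadows OrderedDict.items()

try:
    for key, value in d.items():  # what any dict-iterating code would do
        print(key, value)
except TypeError as exc:
    print("items() is shadowed:", exc)  # 'dict' object is not callable
```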
|
closed
|
2019-02-06T03:51:42Z
|
2019-02-21T22:57:55Z
|
https://github.com/axnsan12/drf-yasg/issues/307
|
[] |
PaulWay
| 1
|
blacklanternsecurity/bbot
|
automation
| 1,457
|
Occasional Newlines in URLs
|
```
[URL_UNVERIFIED] https://uk.yahoo.com/news/ex-spandau-ballet-singer-complimented-080549530.html excavate (endpoint, extension-html, in-scope, spider-danger)
[URL_UNVERIFIED] https://tw.news.yahoo.com/ç©æ¡é
°å±
風波-鿤-éå½ç¼ç
§-æµåº-ç¶²åæ
[INFO] wispy_taylor: Modules running (incoming:processing:outgoing) cloud(506:1:0), dns(0:0:506), httpx(192:207:0), excavate(179:1:0)
```
Excavate's `a-tag` regex seems to be responsible:
```
"discovery_context": "excavate's URL extractor (a-tag regex) found URL_UNVERIFIED: https://tw.news.yahoo.com/äºåé ä¸é£¯è-å³åè
åè çµ²-024022257.html in HTTP response body",
```
@liquidsec
|
closed
|
2024-06-13T13:00:42Z
|
2025-02-11T16:56:08Z
|
https://github.com/blacklanternsecurity/bbot/issues/1457
|
[
"bug",
"low priority"
] |
TheTechromancer
| 2
|
laurentS/slowapi
|
fastapi
| 111
|
how to control limit by duration?
|
I want to set a minimum duration between requests. For example, I want at least 30 s between each request. How can I achieve this?
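For reference, limit strings accept an interval multiple, so the standard slowapi pattern should cover this; a minimal sketch, assuming the usual `Limiter`/`get_remote_address` setup from the docs:
```python
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.get("/items")
@limiter.limit("1 per 30 seconds")  # at most one request every 30 s per client
async def read_items(request: Request):
    return {"ok": True}
```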
|
closed
|
2022-08-30T01:14:12Z
|
2022-08-31T12:39:35Z
|
https://github.com/laurentS/slowapi/issues/111
|
[] |
moo611
| 4
|
deepfakes/faceswap
|
deep-learning
| 999
|
Train Error: Caught exception in thread: '_training_0'
|
I get an error when training on both Linux and Windows:
Linux:
When I train on Linux, I only get this:
```
Setting Faceswap backend to NVIDIA
03/27/2020 09:43:23 INFO Log level set to: INFO
Using TensorFlow backend.
```
Windows:
On Windows I get this error:
```
Setting Faceswap backend to NVIDIA
03/27/2020 09:45:45 INFO Log level set to: INFO
Using TensorFlow backend.
03/27/2020 09:45:48 INFO Model A Directory: E:\fffan\DeepFake\FacesWap\faceswap-master\input\input_A
03/27/2020 09:45:48 INFO Model B Directory: E:\fffan\DeepFake\FacesWap\faceswap-master\input\input_B
03/27/2020 09:45:48 INFO Training data directory: E:\fffan\DeepFake\FacesWap\faceswap-master\models
03/27/2020 09:45:48 INFO ===================================================
03/27/2020 09:45:48 INFO Starting
03/27/2020 09:45:48 INFO Press 'ENTER' to save and quit
03/27/2020 09:45:48 INFO Press 'S' to save model weights immediately
03/27/2020 09:45:48 INFO ===================================================
03/27/2020 09:45:49 INFO Loading data, this may take a while...
03/27/2020 09:45:49 INFO Loading Model from Original plugin...
03/27/2020 09:45:49 INFO No existing state file found. Generating.
03/27/2020 09:45:50 CRITICAL Error caught! Exiting...
03/27/2020 09:45:50 ERROR Caught exception in thread: '_training_0'
```
Why do I get this error when training, and why are the errors different on Linux and Windows?
Linux : tensorflow==1.14; cuda == 9.0; cudnn==7.4.2
Windows : tensorflow==1.14; cuda == 10.0; cudnn==7.6.1
|
closed
|
2020-03-27T02:13:31Z
|
2020-12-06T05:08:11Z
|
https://github.com/deepfakes/faceswap/issues/999
|
[] |
Tian14267
| 6
|
sammchardy/python-binance
|
api
| 568
|
client order_market_buy call is failing
|
The client order_market_buy call is failing with this error
binance.exceptions.BinanceAPIException: APIError(code=-1106): Parameter 'quoteOrderQty' sent when not required.
**To Reproduce**
```python
from binance.client import Client

api_test_key = 'sjdflkdsjf'
api_test_secret = 'klfdsajflk'

client = Client(api_test_key, api_test_secret)
client.API_URL = 'https://testnet.binance.vision/api'

client.order_market_buy(
    quantity=1,
    quoteOrderQty=10,
    symbol='BTCUSDT')
```
**Expected behavior**
The order should be placed on the test chain and I should be able to inspect its status using its id.
According to documentation the quantity will be calculated based on the quoteOrderQty and the market availability.
**Environment (please complete the following information):**
- Python version: 3.7.4
- Catalina and windows 10
- python-binance version v0.7.5
|
open
|
2020-08-11T12:07:15Z
|
2020-09-01T09:12:17Z
|
https://github.com/sammchardy/python-binance/issues/568
|
[] |
iamveritas
| 3
|
explosion/spaCy
|
machine-learning
| 13,551
|
Italian & Spanish NER shouldn't extract "Google" or "Facebook"?
|
While extracting entities from news articles, I've noticed this behavior:

These words are present in articles but are not extracted by the models.
Does anyone know the reason?
## Info about spaCy
- **spaCy version:** 3.7.5
- **Platform:** Linux-6.1.85+-x86_64-with-glibc2.35
- **Python version:** 3.10.12
- **Pipelines:** es_core_news_lg (3.7.0), it_core_news_lg (3.7.0)
|
open
|
2024-07-01T09:48:10Z
|
2024-08-05T13:06:48Z
|
https://github.com/explosion/spaCy/issues/13551
|
[] |
davgargar
| 1
|
robinhood/faust
|
asyncio
| 128
|
Fast creation of pandas dataframe from streamed data
|
Since your documentation mentions pandas as an option to use together with faust, I was wondering if anybody can recommend a **fast** way to build a dataframe inside an agent from previously saved (rocksdb) data and newly incoming stream data, as an example.
Sample code, also for later use as a cheatsheet in the documentation, would be much appreciated.
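Not an official recipe, just a minimal sketch of the streaming half (the app name, topic name, record fields and broker URL are made up): batch events with `Stream.take()` and build a DataFrame per batch.
```python
import faust
import pandas as pd

app = faust.App("df-demo", broker="kafka://localhost:9092")

class Measurement(faust.Record):
    sensor: str
    value: float

topic = app.topic("measurements", value_type=Measurement)

@app.agent(topic)
async def build_frames(stream):
    # take() yields lists of up to 1000 events, or whatever arrived within 10 seconds
    async for batch in stream.take(1000, within=10):
        df = pd.DataFrame([event.asdict() for event in batch])
        print(df.describe())
```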
|
open
|
2018-08-02T10:33:54Z
|
2020-02-27T23:19:23Z
|
https://github.com/robinhood/faust/issues/128
|
[
"Status: Help Wanted",
"Component: Serializers",
"Component: Models",
"Issue Type: Documentation",
"Issue-Type: Feature Request"
] |
trbck
| 2
|
google-research/bert
|
nlp
| 910
|
`Whole Word Masking` does not work in this code at 'create_pretraining_data.py'
|
Hi, there :)
First of all, thanks for sharing such nice BERT code that is easy for beginners to follow.
I made an issue to share that `whole word masking` does not work even though I set it to True.
As I read the code at (
https://github.com/google-research/bert/blob/cc7051dc592802f501e8a6f71f8fb3cf9de95dc9/create_pretraining_data.py#L388), the **indexes** in index_set are not processed as designed.
I think it can be easily solved by replacing lines 388~405 of create_pretraining_data.py with:
```python
masked_token = None  # 80% of the time, replace with "[MASK]"
is_ori_token = False  # 10% of the time, keep original and 10% of the time, replace with random word
if rng.random() < 0.8:
    masked_token = "[MASK]"
else:
    if rng.random() < 0.5:
        # 10% of the time, keep original
        is_ori_token = True

for index in index_set:
    covered_indexes.add(index)
    if masked_token is not None:
        if is_ori_token:
            # 10% of the time, keep original
            masked_token = tokens[index]
        else:
            # 10% of the time, replace with random word
            masked_token = vocab_words[rng.randint(0, len(vocab_words) - 1)]
    output_tokens[index] = masked_token
    masked_lms.append(MaskedLmInstance(index=index, label=tokens[index]))
```
If it is wrong, please let me know. 👍
|
open
|
2019-11-12T06:58:34Z
|
2022-12-21T17:10:32Z
|
https://github.com/google-research/bert/issues/910
|
[] |
alberto-hong
| 3
|
RobertCraigie/prisma-client-py
|
pydantic
| 1,008
|
Improve error message when running prisma with an outdated client
|
## Problem
When upgrading to a newer version of prisma python client - let's say, 0.13.1 to 0.14.0, if you forget to run `prisma generate` again, you'll get an error message like this
> File "/Users/adeel/Documents/GitHub/my-project/backend/utils/db.py", line 36, in get_db
_db = Prisma(auto_register=True, datasource={"url": get_db_uri(application_name)})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adeel/Library/Caches/pypoetry/virtualenvs/my-project-cJkWU15t-py3.12/lib/python3.12/site-packages/prisma/client.py", line 156, in __init__
self._set_generated_properties(
TypeError: BasePrisma._set_generated_properties() missing 1 required keyword-only argument: 'preview_features'
It's not very obvious based on this `preview_features` message that this error actually stems from an outdated client.
## Suggested solution
The prisma library already knows `prisma.__version__`. We should also track the version used for generating the client - and if it's missing or mismatched, raise an error.
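For illustration only, a rough sketch of what the suggested check could look like (the `GENERATED_WITH` constant is hypothetical; it would have to be stamped into the generated client by `prisma generate`):
```python
import prisma

# hypothetical: written into the generated client at generation time
GENERATED_WITH = "0.13.1"

if prisma.__version__ != GENERATED_WITH:
    raise RuntimeError(
        f"The Prisma client was generated with {GENERATED_WITH} but prisma "
        f"{prisma.__version__} is installed; run `prisma generate` again."
    )
```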
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
|
open
|
2024-08-11T16:16:51Z
|
2024-08-18T16:18:34Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/1008
|
[
"kind/improvement",
"level/intermediate",
"priority/medium",
"topic: dx"
] |
AdeelK93
| 0
|
FactoryBoy/factory_boy
|
django
| 418
|
How to implement a factory with a field of Foreign Key to self?
|
```python
class EventType(models.Model):
"""Event Type Model"""
category = models.ForeignKey('EventType', null=True, blank=True, verbose_name='Category')
name = models.CharField(max_length=50, unique=True, verbose_name='Event Type')
```
How can I make a factory for this model?
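For what it's worth, one common pattern is to default the self-referencing field to `None` and opt into a parent via a trait; a sketch (the `myapp.factories` import path is assumed):
```python
import factory

class EventTypeFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = EventType  # the model shown above

    name = factory.Sequence(lambda n: "Event type %d" % n)
    category = None  # don't recurse by default

    class Params:
        # EventTypeFactory(with_parent=True) creates one parent level and stops,
        # because the parent itself falls back to category=None.
        with_parent = factory.Trait(
            category=factory.SubFactory("myapp.factories.EventTypeFactory"),
        )
```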
|
closed
|
2017-09-14T10:32:52Z
|
2017-09-18T07:13:07Z
|
https://github.com/FactoryBoy/factory_boy/issues/418
|
[] |
NikosVlagoidis
| 1
|
vimalloc/flask-jwt-extended
|
flask
| 61
|
Docs - porting from flask-jwt
|
Thanks for the hard work.
Looking through the docs, it appears this extension is similar to flask-jwt but not identical. It would be nice to have a section in the docs on how to port to the newer library and any associated concerns there might be. So far I've noticed:
- JWT --> JWTManager(app)
- Need to create an auth view method
- Config:
- JWT_AUTH_HEADER_PREFIX --> JWT_HEADER_TYPE
- JWT_EXPIRATION_DELTA --> JWT_ACCESS_TOKEN_EXPIRES
- No leeway
Is that a good start?
|
closed
|
2017-07-03T21:22:25Z
|
2017-10-13T19:32:55Z
|
https://github.com/vimalloc/flask-jwt-extended/issues/61
|
[] |
mixmastamyk
| 1
|
zalandoresearch/fashion-mnist
|
computer-vision
| 62
|
Link in arxiv has extra period at the end
|
Hey guys,
Thank you for publishing this new dataset.
Not really an issue with the dataset, but the link that you have in the arXiv abstract has an extra period at the end.
Regards
|
closed
|
2017-09-09T23:00:13Z
|
2017-09-14T17:16:23Z
|
https://github.com/zalandoresearch/fashion-mnist/issues/62
|
[] |
DataWaveAnalytics
| 1
|
google-deepmind/graph_nets
|
tensorflow
| 2
|
ImportError: No module named tensorflow_probability
|
Hi, I ran into an **environment** problem ...
I could not run your demos either on Colab or on my local PC. All of them report a ModuleNotFoundError for 'tensorflow_probability'.
For the Colab of `shortest_path.ipynb`, after changing the install setting to `install_graph_nets_library = "yes"`, it reports an error like:
```
ImportErrorTraceback (most recent call last)
<ipython-input-5-c414b43b0fe0> in <module>()
8 import time
9
---> 10 from graph_nets import graphs
11 from graph_nets import utils_np
12 from graph_nets import utils_tf
/usr/local/lib/python2.7/dist-packages/graph_nets/__init__.py in <module>()
19 from __future__ import print_function
20
---> 21 from graph_nets import blocks
22 from graph_nets import graphs
23 from graph_nets import modules
/usr/local/lib/python2.7/dist-packages/graph_nets/blocks.py in <module>()
36 from graph_nets import graphs
37 from graph_nets import utils_tf
---> 38 import sonnet as snt
39 import tensorflow as tf
40
/usr/local/lib/python2.7/dist-packages/sonnet/__init__.py in <module>()
40 import semantic_version
41
---> 42 from sonnet.python import custom_getters
43 from sonnet.python.modules import experimental
44 from sonnet.python.modules import nets
/usr/local/lib/python2.7/dist-packages/sonnet/python/custom_getters/__init__.py in <module>()
19 from __future__ import print_function
20
---> 21 from sonnet.python.custom_getters import bayes_by_backprop
22 from sonnet.python.custom_getters.context import Context
23 from sonnet.python.custom_getters.non_trainable import non_trainable
/usr/local/lib/python2.7/dist-packages/sonnet/python/custom_getters/bayes_by_backprop.py in <module>()
92
93 import tensorflow as tf
---> 94 import tensorflow_probability as tfp
95
96 _DEFAULT_SCALE_TRANSFORM = tf.nn.softplus
ImportError: No module named tensorflow_probability
```
A similar error occurs locally.
Thank you for your help!
|
closed
|
2018-10-19T15:43:18Z
|
2019-07-08T06:23:10Z
|
https://github.com/google-deepmind/graph_nets/issues/2
|
[] |
murphyyhuang
| 3
|
gunthercox/ChatterBot
|
machine-learning
| 1,721
|
-
|
closed
|
2019-04-29T23:06:10Z
|
2023-04-14T23:44:29Z
|
https://github.com/gunthercox/ChatterBot/issues/1721
|
[] |
dtoxodm
| 3
|
|
plotly/dash
|
plotly
| 3,065
|
Dash in Jupyter tries to bind to $HOST set by conda, cannot override
|
**Summary**
Micromamba, and supposedly other python package managers set the HOST environment variable to something like x86_64-conda-linux-gnu. Then, when I try to run a dash app in Jupyter, I get an error as dash tries to bind to host "x86_64-conda-linux-gnu", which obviously doesn't resolve to a valid IP. Here's a stackoverflow post about the error:
https://stackoverflow.com/questions/61006240/name-or-service-not-known-when-running-a-dash-app
In the above case, the solution was to use the host parameter of app.run().
_However, the parameter does not seem to override $HOST in a Jupyter environment._
Packages:
```
dash 2.18.1
jupyter_client 8.6.3
jupyter_core 5.7.2
jupyter-events 0.10.0
jupyter-lsp 2.2.5
jupyter_server 2.14.2
jupyter_server_terminals 0.5.3
jupyterlab 4.2.5
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.3
jupyterlab_widgets 3.0.13
```
|
open
|
2024-11-07T20:11:26Z
|
2024-11-11T14:45:17Z
|
https://github.com/plotly/dash/issues/3065
|
[
"bug",
"P2"
] |
discapes
| 0
|
Miksus/rocketry
|
automation
| 193
|
Passing a mongoclient to MongoRepo does not work. Rocketry.run() fails.
|
Hi, cool project, but is this supposed to work?
```
client= MongoClient(
"mongodb+srv://<user>:<pass>@<blabla>.a5qhe.mongodb.net/<SomeDb>?retryWrites=true&w=majority",
server_api=ServerApi("1"),
)
mongo_repo = MongoRepo(
client=client,
model=TaskRunRecord,
database="MyDatabase",
collection="MyCollection",
)
app = Rocketry(logger_repo=mongo_repo)
if __name__ == "__main__":
app.run()
```
**I get this Error:**
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [WinError 10061] No connection could be made because the target machine actively refused it,
**Additional context**
I'm on Windows and using Rocketry 2.5.1. No problems whatsoever with the MongoClient itself. I have tried several different kinds of configurations, without luck. I did manage to write a few records if I instead provided the uri to MongoRepo and set the kwarg id_field to run_id. I don't know if this is mapped to the db-side _id, but it did not look like an ObjectId. I do not want, under any circumstances, anything other than a proper ObjectId for that field. Is there any way to make this happen, or any way I can contribute to make it happen?
**UPDATE**
Ok, so I realize that this issue belongs in the redbird repo. With client set, MongoSession.get_bind() tries to create a client with url of None.
|
open
|
2023-02-16T15:12:20Z
|
2023-02-20T10:26:03Z
|
https://github.com/Miksus/rocketry/issues/193
|
[
"bug"
] |
StigKorsnes
| 1
|
exaloop/codon
|
numpy
| 317
|
Throws error on lines ending with backslash
|
A backslash at the end of a line seems to cause Codon to throw an error: unexpected dedent. Python does not throw this error. Note that Python allows a backslash after a comma or a plus sign or the like, where it is obvious from the context that the line is incomplete. Codon does not accept this. Codon's parser should be modified to accept precisely the same syntax as Python without throwing a syntax error. There are massive corpora of Python on the web that can be fed in to verify identical syntactic processing.
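For concreteness, a tiny example of the kind of continuation being discussed (both lines are valid Python):
```python
total = 1 + \
        2      # explicit backslash continuation after an operator

pair = (1, \
        2)     # redundant but legal in Python: backslash after a comma inside brackets
```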
|
closed
|
2023-04-02T21:06:46Z
|
2023-04-12T22:14:02Z
|
https://github.com/exaloop/codon/issues/317
|
[] |
codeisnotcode
| 2
|
litestar-org/litestar
|
api
| 3,465
|
Bug: Can't convert sqlalchemy model to pydantic model which is inherited from BaseModel
|
### Description
**I tried to convert the sqlalchemy model to pydantic BaseModel but I get an error**:
using the `pydantic.type_adapter.TypeAdapter`:
```py
ValidationError: 1 validation error for list[User]
Input should be a valid list [type=list_type,
input_value=<app.database.models.user...t at 0x000002DCABD17710>,
input_type=User]
```
using the `to_schema` method of the `advanced_alchemy.service.SQLAlchemyAsyncRepositoryService` class:
```py
ValidationError: 1 validation error for list[User]
0
Input should be a valid dictionary or instance of User [type=model_type,
input_value=<app.database.models.user...t at 0x0000018BE6647990>,
input_type=User]
```
### URL to code causing the issue
https://github.com/monok8i/backend-microservice-structure/blob/main/backend/users-service/app/domain/users/services.py
### MCVE
My Pydantic model:
```python
class UserBase(BaseModel):
email: Optional[EmailStr] = None
is_active: Optional[bool] = True
is_superuser: bool = False
is_activated: bool = False
class User(UserBase):
id: int
```
My SQLAlchemy Model:
```py
class Base(AsyncAttrs, DeclarativeBase):
__table_args__ = {"extend_existing": True}
@declared_attr.directive
def __tablename__(cls) -> str:
return f"{cls.__name__.lower()}s"
@declared_attr
def id(cls) -> Mapped[int]:
return mapped_column(Integer, primary_key=True, index=True)
created_at: Mapped[datetime] = mapped_column(
TIMESTAMP(timezone=True), server_default=func.now()
)
updated_at: Mapped[datetime] = mapped_column(
TIMESTAMP(timezone=True), server_default=func.now(), server_onupdate=func.now()
)
class User(Base):
email: Mapped[str] = mapped_column(String, unique=True)
hashed_password: Mapped[str] = mapped_column(String)
is_active: Mapped[bool] = mapped_column(Boolean, default=True)
is_superuser: Mapped[bool] = mapped_column(Boolean, default=False)
is_activated: Mapped[bool] = mapped_column(Boolean, default=False)
```
The first attempt to convert the sqlalchemy model using `pydantic.type_adapter.TypeAdapter` in the same way as in the inherited `to_schema` method but without `from_attributes` parameter: **(ValidationError)**
```py
from app.domain.users.schemas import User as PydanticUser
class UserService(SQLAlchemyAsyncRepositoryService[User]):
repository_type: SQLAlchemyAsyncRepository[User] = UserRepository
def __init__(
self,
session: AsyncSession | async_scoped_session[AsyncSession],
**repo_kwargs: Any,
) -> None:
self.repository = self.repository_type(
session=session,
**repo_kwargs,
)
self.model = self.repository.model_type
super().__init__(
session, statement, auto_expunge, auto_refresh, auto_commit, **repo_kwargs
)
async def get_users(self) -> OffsetPagination[PydanticUser]:
results, count = await self.list_and_count()
results = [
TypeAdapter(PydanticUser).validate_python(user) for user in results
]
return self.to_schema(data=results, total=count)
```
The second attempt to convert the sqlalchemy model using the inherited `to_schema` method with no other parameters: **(ValidationError)**
```py
async def get_users(self) -> OffsetPagination[PydanticUser]:
results, count = await self.list_and_count()
return self.to_schema(data=results, total=count, schema_type=PydanticUser)
```
The attempts to convert a database model in Pydantic but using the `model_validate` method of the `BaseModel` class and `TypeAdapter` with parameter `from_attributes`:
```py
# first
async def get_users(self) -> OffsetPagination[PydanticUser]:
results, count = await self.list_and_count()
results = [PydanticUser.model_validate(user, from_attributes=True) for user in results]
return self.to_schema(data=results, total=count)
```
```py
# second
async def get_users(self) -> OffsetPagination[PydanticUser]:
results, count = await self.list_and_count()
results = TypeAdapter(list[PydanticUser]).validate_python([user for user in results], from_attributes=True)
return self.to_schema(data=results, total=count, schema_type=PydanticUser)
```
```py
# third
async def get_users(self) -> OffsetPagination[PydanticUser]:
results, count = await self.list_and_count()
results = [TypeAdapter(PydanticUser).validate_python(user, from_attributes=True) for user in results]
return self.to_schema(data=results, total=count, schema_type=PydanticUser)
```
**And already this time everything works correctly, I get my result:**
```json
{
"items": [
{
"email": "admin@admin.admin",
"is_active": true,
"is_superuser": true,
"is_activated": true,
"id": 1
}
],
"limit": 1,
"offset": 0,
"total": 1
}
```
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
```bash
2024-05-03T16:33:59.571379Z [info ] HTTP Request method=GET path=/api/users path_params={} query={}
INFO - 2024-05-03 18:33:59,817 - sqlalchemy.engine.Engine - base - select pg_catalog.version()
INFO - 2024-05-03 18:33:59,818 - sqlalchemy.engine.Engine - base - [raw sql] ()
INFO - 2024-05-03 18:33:59,821 - sqlalchemy.engine.Engine - base - select current_schema()
INFO - 2024-05-03 18:33:59,821 - sqlalchemy.engine.Engine - base - [raw sql] ()
INFO - 2024-05-03 18:33:59,823 - sqlalchemy.engine.Engine - base - show standard_conforming_strings
INFO - 2024-05-03 18:33:59,823 - sqlalchemy.engine.Engine - base - [raw sql] ()
INFO - 2024-05-03 18:33:59,825 - sqlalchemy.engine.Engine - base - BEGIN (implicit)
INFO - 2024-05-03 18:33:59,829 - sqlalchemy.engine.Engine - base - SELECT users.email, users.hashed_password, users.is_active, users.is_superuser, users.is_activated, users.created_at, users.updated_at, users.id, count(users.id) OVER () AS anon_1
FROM users
INFO - 2024-05-03 18:33:59,830 - sqlalchemy.engine.Engine - base - [generated in 0.00030s] ()
2024-05-03T16:33:59.855384Z [error ] Uncaught Exception connection_type=http path=/api/users traceback= File "C:\Users\Swift\AppData\Local\pypoetry\Cache\virtualenvs\litestar-users-service-E9vRFPaJ-py3.11\Lib\site-packages\advanced_alchemy\service\_async.py", line 297, in to_schema
return to_schema(data=data, total=total, filters=filters, schema_type=schema_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Swift\AppData\Local\pypoetry\Cache\virtualenvs\litestar-users-service-E9vRFPaJ-py3.11\Lib\site-packages\advanced_alchemy\service\_converters.py", line 148, in to_schema
items=TypeAdapter(List[schema_type]).validate_python(data), # type: ignore[valid-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Swift\AppData\Local\pypoetry\Cache\virtualenvs\litestar-users-service-E9vRFPaJ-py3.11\Lib\site-packages\pydantic\type_adapter.py", line 260, in validate_python
return self.validator.validate_python(object, strict=strict, from_attributes=from_attributes, context=context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for list[User]
0
Input should be a valid dictionary or instance of User [type=model_type, input_value=<app.database.models.user...t at 0x0000018BE6647990>, input_type=User]
For further information visit https://errors.pydantic.dev/2.7/v/model_type
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ C:\Users\Swift\AppData\Local\pypoetry\Cache\virtualenvs\litestar-users-servi │
│ ce-E9vRFPaJ-py3.11\Lib\site-packages\litestar\middleware\exceptions\middlewa │
│ re.py:219 in __call__ │
│ │
│ 216 │ │ │ None │
│ 217 │ │ """ │
│ 218 │ │ try: │
│ ❱ 219 │ │ │ await self.app(scope, receive, send) │
│ 220 │ │ except Exception as e: # noqa: BLE001 │
│ 221 │ │ │ litestar_app = scope["app"] │
│ 222 │
│ │
│ C:\Users\Swift\AppData\Local\pypoetry\Cache\virtualenvs\litestar-users-servi │
│ ce-E9vRFPaJ-py3.11\Lib\site-packages\litestar\routes\http.py:82 in handle │
│ │
│ 79 │ │ if route_handler.resolve_guards(): │
│ 80 │ │ │ await route_handler.authorize_connection(connection=reques │
│ 81 │ │ │
│ ❱ 82 │ │ response = await self._get_response_for_request( │
│ 83 │ │ │ scope=scope, request=request, route_handler=route_handler, │
│ 84 │ │ ) │
│ 85 │
│ │
│ ... 6 frames hidden ... │
│ │
│ C:\Users\Swift\AppData\Local\pypoetry\Cache\virtualenvs\litestar-users-servi │
│ ce-E9vRFPaJ-py3.11\Lib\site-packages\advanced_alchemy\service\_converters.py │
│ :148 in to_schema │
│ │
│ 145 │ │ total = total if total else len(data) │
│ 146 │ │ limit_offset = limit_offset if limit_offset is not None else L │
│ 147 │ │ return OffsetPagination[schema_type]( # type: ignore[valid-ty │
│ ❱ 148 │ │ │ items=TypeAdapter(List[schema_type]).validate_python(data) │
│ 149 │ │ │ limit=limit_offset.limit, │
│ 150 │ │ │ offset=limit_offset.offset, │
│ 151 │ │ │ total=total, │
│ │
│ C:\Users\Swift\AppData\Local\pypoetry\Cache\virtualenvs\litestar-users-servi │
│ ce-E9vRFPaJ-py3.11\Lib\site-packages\pydantic\type_adapter.py:260 in │
│ validate_python │
│ │
│ 257 │ │ Returns: │
│ 258 │ │ │ The validated object. │
│ 259 │ │ """ │
│ ❱ 260 │ │ return self.validator.validate_python(object, strict=strict, f │
│ 261 │ │
│ 262 │ def validate_json( │
│ 263 │ │ self, data: str | bytes, /, *, strict: bool | None = None, con │
ValidationError: 1 validation error for list[User]
0
Input should be a valid dictionary or instance of User [type=model_type,
input_value=<app.database.models.user...t at 0x0000018BE6647990>,
input_type=User]
For further information visit https://errors.pydantic.dev/2.7/v/model_type
```
### Litestar Version
2.8.2
### Platform
- [ ] Linux
- [ ] Mac
- [X] Windows
- [ ] Other (Please specify in the description above)
|
closed
|
2024-05-03T16:45:49Z
|
2025-03-20T15:54:41Z
|
https://github.com/litestar-org/litestar/issues/3465
|
[
"Bug :bug:"
] |
monok8i
| 1
|
flasgger/flasgger
|
flask
| 266
|
uiversion 3 with blank url after initializing default data
|
I overwrite the default data with the following code:
```
# config.py
template = {
"swagger": "2.0",
"info": {
"title": "KG Service API & Algorithm API",
"description": "API for Knowledge Hub & Algorithm Hub",
"contact": {
"responsibleOrganization": "ME",
"responsibleDeveloper": "Me",
"email": "me@me.com",
"url": "www.privacy.com",
},
"termsOfService": "",
"version": "0.0.1"
},
"host": "mysite.com", # overrides localhost:500
"basePath": "/", # base bash for blueprint registration
"schemes": [
"http"
],
"operationId": "gskg_service_api"
}
# __init__.py
from flasgger import Flasgger
flask_swagger = Flasgger()
flask_swagger.init_app(app)
app.swag.template = template
```
Because I use factory pattern, I set the template with `app.swag.template` after looking up the source code.
The appearance on uiversion 3 looks strange, as in the screenshot below.
The problem is that the `the developer - Website` link points to a blank URL rather than `www.privacy.com`.
<img width="623" alt="screen shot 2018-11-21 at 11 29 45 pm" src="https://user-images.githubusercontent.com/12616602/48851292-9d784a00-ede5-11e8-88f2-65fc585ecaad.png">
On version 2, it looks good
<img width="618" alt="screen shot 2018-11-21 at 11 40 27 pm" src="https://user-images.githubusercontent.com/12616602/48851798-dfee5680-ede6-11e8-981e-2e7c95847175.png">
I suppose this is a problem in the frontend. Please help; I like the more modern uiversion 3 style.
|
closed
|
2018-11-21T15:45:15Z
|
2020-06-16T07:12:12Z
|
https://github.com/flasgger/flasgger/issues/266
|
[] |
huanghe314
| 1
|
wkentaro/labelme
|
deep-learning
| 898
|
No TIF support on Ubuntu 18.04
|
I tried with your newest version and with 4.2.9 as recommended in #675, but in neither case was I able to open TIF images. Any suggestion?
It works normally with JPG.
in iPython
```
from PyQt5.Qt import PYQT_VERSION_STR
PYQT_VERSION_STR
'5.10.1'
QImageReader.supportedImageFormats()
[PyQt5.QtCore.QByteArray(b'bmp'),
PyQt5.QtCore.QByteArray(b'cur'),
PyQt5.QtCore.QByteArray(b'gif'),
PyQt5.QtCore.QByteArray(b'ico'),
PyQt5.QtCore.QByteArray(b'jpeg'),
PyQt5.QtCore.QByteArray(b'jpg'),
PyQt5.QtCore.QByteArray(b'pbm'),
PyQt5.QtCore.QByteArray(b'pgm'),
PyQt5.QtCore.QByteArray(b'png'),
PyQt5.QtCore.QByteArray(b'ppm'),
PyQt5.QtCore.QByteArray(b'svg'),
PyQt5.QtCore.QByteArray(b'svgz'),
PyQt5.QtCore.QByteArray(b'xbm'),
PyQt5.QtCore.QByteArray(b'xpm')]
```
|
closed
|
2021-07-26T16:39:25Z
|
2022-06-25T04:39:03Z
|
https://github.com/wkentaro/labelme/issues/898
|
[] |
Camilochiang
| 4
|
gradio-app/gradio
|
machine-learning
| 10,051
|
`gr.Image` fullscreen button doesn't appear if `interactive=True`
|
### Describe the bug
When using `interactive=True` and `show_fullscreen_button=True`,
the fullscreen button doesn't appear.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as main:
    image1 = gr.Image(interactive=True, show_download_button=True, show_fullscreen_button=True)
    image2 = gr.Image(interactive=False, show_download_button=True, show_fullscreen_button=True)
```
### Screenshot

### Logs
_No response_
### System Info
```shell
gradio 5.6.0
```
### Severity
I can work around it
|
closed
|
2024-11-27T15:13:49Z
|
2024-11-27T20:03:13Z
|
https://github.com/gradio-app/gradio/issues/10051
|
[
"enhancement"
] |
tripleS-Dev
| 1
|
stanfordnlp/stanza
|
nlp
| 919
|
Constituency Parser does not produce output for every line
|
**Describe the bug**
English Constituency Parser does not produce output for every line.
**To Reproduce**
I am trying to compare the trees of a pair of documents.
```python
refdoc = open(os.path.join('.', 'drive', 'MyDrive', 'en.devtest'), 'r').read().split('\n')
hypdoc = open(os.path.join('.', 'drive', 'MyDrive', 'sw-en-hyp.txt'), 'r').read().split('\n')
print(len(refdoc), len(hypdoc))
# 1013 1013
```
They are both English and they are both the same length in lines.
However, when I run:
```python
stanza.download('en')
nlp = stanza.Pipeline(lang='en', processors='tokenize,pos,constituency', tokenize_no_ssplit=True)
refdoc = nlp(refdoc)
hypdoc = nlp(hypdoc)
print(len(refdoc.sentences), len(hypdoc.sentences))
# 1012 1010
```
**Expected behavior**
I expect doc.sentences to be the same length as the original unparsed doc, and I expect doc.sentences to be the same length for both docs. This is critical.
**Environment:**
- OS: Google Colab
- Python version: 3.7
- Stanza version: 1.3.0
**Additional context**
Each line may contain more or less than 1 complete sentence.
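Not a fix for the underlying behaviour, just a workaround sketch: running the pipeline one line at a time guarantees the output count matches the input count (assuming `refdoc` is the list of raw lines as above).
```python
ref_parses = []
for line in refdoc:
    if not line.strip():
        ref_parses.append(None)  # keep a placeholder so indices stay aligned
        continue
    doc = nlp(line)
    ref_parses.append([sentence.constituency for sentence in doc.sentences])

assert len(ref_parses) == len(refdoc)
```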
|
closed
|
2022-01-09T15:27:27Z
|
2022-01-11T02:45:03Z
|
https://github.com/stanfordnlp/stanza/issues/919
|
[
"bug"
] |
lhambrid
| 5
|
indico/indico
|
sqlalchemy
| 6,685
|
Add fa_IR to Indico
|
**Is your feature request related to a problem? Please describe.**
Hello All.
Hope you're doing well.
I'm a student at Isfahan University of Technology, Iran.
My university needs a Persian translation of Indico and I'm going to translate it, but as I saw in the docs, I should wait until Transifex adds Persian.
**Describe the solution you'd like**
I have translated small parts of Indico (4%), such as the room booking section.
How can I commit the files to Indico?
**Describe alternatives you've considered**
We want to use room booking for the next college term, so we need it.
**Additional context**



|
closed
|
2025-01-02T02:08:30Z
|
2025-01-05T21:23:35Z
|
https://github.com/indico/indico/issues/6685
|
[
"enhancement"
] |
aforouz
| 6
|
charlesq34/pointnet
|
tensorflow
| 238
|
Preparing custom pointcloud dataset
|
Hey there, I have a Nuscenes point cloud dataset available and I would like to use it with this project. I looked at the `data_prep_util.py` script; however, I don't know where to start. Can anyone please help?
|
open
|
2020-03-30T09:07:20Z
|
2020-03-30T09:07:20Z
|
https://github.com/charlesq34/pointnet/issues/238
|
[] |
mtshikomba
| 0
|
pytorch/pytorch
|
python
| 149,422
|
Pip-installed pytorch limits threads to 1 when setting GOMP_CPU_AFFINITY (likely due to bundled GOMP)
|
### 🐛 Describe the bug
Pip-installed pytorch limits threads to 1 when setting GOMP_CPU_AFFINITY, while a pytorch build from source code will not have this problem. The pip-installed pytorch will use a bundled GOMP.
Here is a C++ case that can reproduce it.
```
#include <stdio.h>
#include <omp.h>
#include <torch/torch.h>
int main() {
printf("omp_get_max_threads %d\n", omp_get_max_threads());
printf("at::get_num_threads %d\n", at::get_num_threads());
return 0;
}
```
compile command
```g++ -I<PYTHON_INSTALL_DIR>/site-packages/torch/include/torch/csrc/api/include/ -I<PYTHON_INSTALL_DIR>/site-packages/torch/include/ -fopenmp test.cpp -o test.o -L<PYTHON_INSTALL_DIR>/site-packages/torch/lib -ltorch -ltorch_cpu -lc10 -D_GLIBCXX_USE_CXX11_ABI=0```
the result with pip install pytorch

the result with pytorch build from source code

### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease6469.7-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr avx512_fp16 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @seemethere @malfet @osalpekar @atalman
|
open
|
2025-03-18T19:04:32Z
|
2025-03-21T02:25:30Z
|
https://github.com/pytorch/pytorch/issues/149422
|
[
"module: binaries",
"triaged"
] |
yuchengliu1
| 4
|
FactoryBoy/factory_boy
|
django
| 782
|
Circular DjangoModelFactory RelatedFactory instances are not created
|
#### Description
I have two Django models that refer to each other. StudentPhone has a ForeignKey relationship to Student, and a Student has a primary_phone field that refers to a StudentPhone. I want a StudentFactory that uses StudentPhoneFactory to create its primary_phone field. This is exactly the scenario shown in the example https://factoryboy.readthedocs.io/en/latest/reference.html#factory.RelatedFactory . With the following models.py and test.py, the assert at the end fails.
models.py:
```python
from django.db import models
class Student(models.Model):
email = models.EmailField(blank=True)
primary_phone = models.ForeignKey(
"StudentPhone",
on_delete=models.SET_NULL,
null=True,
related_name="primary_students",
)
class StudentPhone(models.Model):
NUMBER_TYPES = (
("mobile", "Mobile"),
("home", "Home"),
)
student = models.ForeignKey(
"Student", related_name="phones", on_delete=models.CASCADE
)
type = models.CharField(max_length=16, choices=NUMBER_TYPES)
is_valid = models.BooleanField(default=True)
```
tests.py:
```python
import factory
from factory import post_generation
from factory.django import DjangoModelFactory
from main import models
# Create your tests here.
class StudentPhoneFactory(DjangoModelFactory):
class Meta:
model = models.StudentPhone
student = factory.SubFactory("test.factories.StudentFactory")
type = "mobile"
is_valid = True
class StudentFactory(DjangoModelFactory):
class Meta:
model = models.Student
email = factory.Faker("email")
primary_phone = factory.RelatedFactory(
StudentPhoneFactory, factory_related_name="student"
)
s = StudentFactory()
assert(s.primary_phone is not None)
```
Django 3.0.10
FactoryBoy 3.0.1
#### To Reproduce
See the attached django app. Run `./manage.py test main`. The `assert` should succeed but it fails.
[factorybug.zip](https://github.com/FactoryBoy/factory_boy/files/5236427/factorybug.zip)
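As an aside, a possible workaround sketch (my own assumption, not documented behaviour): since `RelatedFactory`'s return value is not assigned back to the student, create and attach the phone in a `post_generation` hook instead.
```python
class StudentFactory(DjangoModelFactory):
    class Meta:
        model = models.Student

    email = factory.Faker("email")

    @factory.post_generation
    def primary_phone(obj, create, extracted, **kwargs):
        if not create:
            return
        # attach the generated phone explicitly and persist it
        obj.primary_phone = extracted or StudentPhoneFactory(student=obj)
        obj.save()
```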
|
open
|
2020-09-17T05:05:24Z
|
2020-10-07T01:03:14Z
|
https://github.com/FactoryBoy/factory_boy/issues/782
|
[
"Doc",
"Django"
] |
incidentist
| 3
|
modelscope/modelscope
|
nlp
| 332
|
How can ModelScope's TTS do streaming (converting and playing at the same time)?
|
It is supported. An issue explaining it will be posted on GitHub later this week, or early next week at the latest. Actually the current code already contains the sub-functions, but I haven't found them.
|
closed
|
2023-06-14T01:48:24Z
|
2024-07-28T01:56:53Z
|
https://github.com/modelscope/modelscope/issues/332
|
[
"Stale",
"pending"
] |
gsn516
| 7
|
fastapi-users/fastapi-users
|
fastapi
| 747
|
get_user_manager is not usable programmatically
|
## Describe the bug
The `get_user_manager` function proposed in the official docs is not usable programmatically.
```
[...] in create_superuser
superuser = await user_manager.create(
AttributeError: 'generator' object has no attribute 'create'
```
## To Reproduce
Steps to reproduce the behavior:
1. Clone the full example for MongoDB available on official docs.
2. Add a function to use user_manager programmatically.
```py
async def create_superuser(email: str, password: str):
try:
user_manager = get_user_manager(get_user_db)
# user_manager = fastapi_users.get_user_manager(get_user_db)
# user_manager = fastapi_users.get_user_manager
superuser = await user_manager.create(
UserCreate(email=email, password=password, is_superuser=True)
)
print(f"Superuser created {superuser}")
except UserAlreadyExists:
print(f"Superuser {email} already exist")
```
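For reference, one way to consume these generator dependencies outside of FastAPI's dependency injection is to wrap them with `contextlib.asynccontextmanager`; a sketch, assuming the module layout of the docs' full example:
```python
import contextlib

from app.db import get_user_db          # paths follow the docs' example app
from app.users import get_user_manager
from app.schemas import UserCreate

get_user_db_context = contextlib.asynccontextmanager(get_user_db)
get_user_manager_context = contextlib.asynccontextmanager(get_user_manager)

async def create_superuser(email: str, password: str):
    async with get_user_db_context() as user_db:
        async with get_user_manager_context(user_db) as user_manager:
            superuser = await user_manager.create(
                UserCreate(email=email, password=password, is_superuser=True)
            )
            print(f"Superuser created {superuser}")
```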
## Expected behavior
A ready to use user_manager for CRUD operations, especially create, update & delete.
## Configuration
- Python version : 3.8.10
- FastAPI version : 0.68.1
- FastAPI Users version : 8.1.0
|
closed
|
2021-09-27T10:35:26Z
|
2021-09-30T07:15:29Z
|
https://github.com/fastapi-users/fastapi-users/issues/747
|
[
"bug"
] |
mlisthenewcool
| 3
|
python-restx/flask-restx
|
api
| 52
|
CI is broken for Coveralls
|
### **Repro Steps**
1. Create a PR
2. The tests run nicely in each environment
3. But suddenly, something goes wrong...
### **Expected Behavior**
The tests, coveralls, etc. should all pass.
### **Actual Behavior**
Coveralls does not pass
### **Error Messages/Stack Trace**
You can see an example failure [here](https://github.com/python-restx/flask-restx/pull/35/checks?check_run_id=437142210).
```
Not on TravisCI. You have to provide either repo_token in .coveralls.yml or set the COVERALLS_REPO_TOKEN env var.
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.6.10/x64/lib/python3.6/site-packages/coveralls/cli.py", line 61, in main
service_name=options['--service'])
File "/opt/hostedtoolcache/Python/3.6.10/x64/lib/python3.6/site-packages/coveralls/api.py", line 58, in __init__
self.ensure_token()
File "/opt/hostedtoolcache/Python/3.6.10/x64/lib/python3.6/site-packages/coveralls/api.py", line 67, in ensure_token
self.config_filename))
coveralls.exception.CoverallsException: Not on TravisCI. You have to provide either repo_token in .coveralls.yml or set the COVERALLS_REPO_TOKEN env var.
##[error]Process completed with exit code 1.
```
### **Additional Context**
* The probably fix here is to create a Github encrypted secret to match the `COVERALLS_REPO_TOKEN: ${{ secrets.COVERALLS_REPO_TOKEN }}` line in test.yml
* This is blocking the tests from passing on #35.
|
closed
|
2020-02-10T21:13:59Z
|
2020-02-11T15:09:45Z
|
https://github.com/python-restx/flask-restx/issues/52
|
[
"bug",
"ci/cd"
] |
plowman
| 3
|
aio-libs/aiomysql
|
asyncio
| 622
|
I can use create_pool to query data, but why can't I query new data that I inserted manually?
|
I can use create_pool to query data, but why can't I query the new data that I inserted manually?
|
closed
|
2021-10-08T08:59:48Z
|
2022-02-18T14:26:55Z
|
https://github.com/aio-libs/aiomysql/issues/622
|
[
"question"
] |
Ray8716397
| 1
|
pywinauto/pywinauto
|
automation
| 391
|
Window hanging issue on Windows 2012 while using pywinauto 0.4 through Jep
|
Hi,
I am using Jep code to launch an application window on a Windows 2012 system.
Sometimes the window that launches hangs immediately after opening and doesn't throw any exception.
The issue is inconsistent.
Is there any solution to this issue?
Thanks,
Elora
|
closed
|
2017-07-19T06:48:18Z
|
2019-05-12T11:35:46Z
|
https://github.com/pywinauto/pywinauto/issues/391
|
[
"question"
] |
eloraparija
| 3
|
google/seq2seq
|
tensorflow
| 304
|
Could you please make the trained models available?
|
Could you please make the trained models available?
|
open
|
2017-10-17T01:37:05Z
|
2017-10-17T01:37:05Z
|
https://github.com/google/seq2seq/issues/304
|
[] |
LeenaShekhar
| 0
|
FujiwaraChoki/MoneyPrinter
|
automation
| 257
|
Error: Response 403: Cloudflare detected
|
I get an error when generating.

|
closed
|
2024-06-26T09:43:20Z
|
2024-06-30T09:04:43Z
|
https://github.com/FujiwaraChoki/MoneyPrinter/issues/257
|
[] |
bqio
| 5
|
pandas-dev/pandas
|
pandas
| 60,308
|
DOC: v2.2 offline documentation search button not work
|
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/pandas.zip
### Documentation problem
Mouse left click search button or Keyboard input Ctrl+k does not popup search bar.
v2.1 and v2.0 work.
https://pandas.pydata.org/pandas-docs/version/2.1/pandas.zip
https://pandas.pydata.org/pandas-docs/version/2.0/pandas.zip
### Suggested fix for documentation
Mouse left click search button or Keyboard input Ctrl+k popup search bar.
|
open
|
2024-11-14T01:09:51Z
|
2024-12-22T11:22:12Z
|
https://github.com/pandas-dev/pandas/issues/60308
|
[
"Docs"
] |
jack6th
| 3
|
pyg-team/pytorch_geometric
|
pytorch
| 10,028
|
Preserve and Subset Edge Features in KNNGraph Transform
|
### 🚀 The feature, motivation and pitch
Currently, the `KNNGraph` transform in PyTorch Geometric constructs a k-nearest-neighbors graph and returns only the `edge_index` tensor. However, if the input graph already has edge features (`edge_attr`), the transform does not subset or retain the corresponding edge attributes for the newly generated `edge_index`; it is simply set to `None`.
I would propose to include functionality to return the subset of `edge_attr` that corresponds to the newly formed KNN graph. This can be efficiently done as follows:
```python
class KNNGraph(BaseTransform):
def __init__(
self,
k: int = 6,
loop: bool = False,
force_undirected: bool = False,
flow: str = 'source_to_target',
cosine: bool = False,
num_workers: int = 1,
) -> None:
self.k = k
self.loop = loop
self.force_undirected = force_undirected
self.flow = flow
self.cosine = cosine
self.num_workers = num_workers
def forward(self, data: Data) -> Data:
assert data.pos is not None
edge_index = torch_geometric.nn.knn_graph(
data.pos,
self.k,
data.batch,
loop=self.loop,
flow=self.flow,
cosine=self.cosine,
num_workers=self.num_workers,
)
if self.force_undirected:
edge_index = to_undirected(edge_index, num_nodes=data.num_nodes)
# Extract i and j from the subset tensor
i = edge_index [0]
j = edge_index [1]
# Calculate the index offset using the formula
index_offset = torch.where(j < i, j, j - 1)
indices = 4 * i + index_offset
data.edge_index = edge_index
data.edge_attr = data.edge_attr[indices]
return data
def __repr__(self) -> str:
return f'{self.__class__.__name__}(k={self.k})'
```
Many thanks,
### Alternatives
_No response_
### Additional context
_No response_
|
open
|
2025-02-13T15:50:47Z
|
2025-02-15T13:44:28Z
|
https://github.com/pyg-team/pytorch_geometric/issues/10028
|
[
"feature",
"transform"
] |
wesmail
| 0
|
pyppeteer/pyppeteer
|
automation
| 486
|
can not get document object in evaluate function
|
I cannot get the document object in the evaluate function; I get None instead.
```python
res = await page.evaluate("""document""") # None
res = await page.evaluate("""document.querySelector('body')""") # None
```
But when I use page.querySelector('body'), the result is correct.
Please help. I have tried several web pages and get the same result.
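As far as I know this is expected behaviour in puppeteer-style APIs: `evaluate()` JSON-serialises its result, and DOM nodes are not serialisable, so they come back as None. A small sketch of the alternatives (returning something serialisable, or asking for a handle):
```python
# return a serialisable value instead of the node itself
html = await page.evaluate("() => document.body.outerHTML")
title = await page.evaluate("() => document.title")

# or get a handle to the node (similar to page.querySelector)
body_handle = await page.evaluateHandle("() => document.body")
```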
|
open
|
2024-11-26T09:39:06Z
|
2024-11-26T09:39:06Z
|
https://github.com/pyppeteer/pyppeteer/issues/486
|
[] |
datuizhuang
| 0
|
LibreTranslate/LibreTranslate
|
api
| 8
|
Limits to use the api on libre translate
|
Hello,
I am really interested in using the API in my project. First, I would like to integrate it into a Java lib that I created (https://github.com/stom79/mytransl). But do you agree to my using your domain for my app (https://github.com/stom79/Fedilab)?
|
closed
|
2021-01-10T13:08:40Z
|
2021-01-19T07:14:01Z
|
https://github.com/LibreTranslate/LibreTranslate/issues/8
|
[] |
ghost
| 7
|
localstack/localstack
|
python
| 11,923
|
bug: LS_LOG=error is extremely verbose
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When I start the container, LocalStack logs all my service configurations.

### Expected Behavior
I would not expect this to happen, given that I'm setting LS_LOG to error.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal s3 mb s3://mybucket
### Environment
```markdown
- OS: macOS
- LocalStack:
LocalStack version: latest
LocalStack Docker image sha:
LocalStack build date:
LocalStack build git hash:
```
### Anything else?
_No response_
|
closed
|
2024-11-25T16:39:56Z
|
2024-11-26T09:12:51Z
|
https://github.com/localstack/localstack/issues/11923
|
[
"type: bug",
"status: triage needed"
] |
FezVrasta
| 1
|
rasbt/watermark
|
jupyter
| 89
|
Include information about how Python was installed
|
Python can be installed in a number of ways. Limiting focus just to Linux, for example:
* Official system package
* On Ubuntu, deadsnakes PPA
* pyenv
* Conda (official default and conda-forge)
* `python` Docker image
Knowing how Python was installed can be useful at times for tools that have more intrusive integration with the interpreter, e.g. profilers like https://pythonspeed.com/fil/ or https://sciagraph.com.
If I submitted a PR to add this, would you be willing to accept it? I'd probably do Linux only, as a first pass, with a limited number of Linux distributions, because I don't know as much about macOS/Windows install mechanisms.
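To make the idea concrete, a rough sketch of the kind of heuristics such a PR might use (my own guesses, not watermark's implementation):
```python
import os
import sys

def guess_install_source() -> str:
    # rough Linux-only heuristics for how the running interpreter was installed
    if os.environ.get("CONDA_PREFIX") or os.path.isdir(os.path.join(sys.prefix, "conda-meta")):
        return "conda"
    if "pyenv" in sys.prefix:
        return "pyenv"
    if os.path.exists("/.dockerenv"):
        return "docker container (possibly the official python image)"
    if sys.prefix.startswith("/usr"):
        return "system package (or a PPA such as deadsnakes)"
    return "unknown"
```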
|
open
|
2022-10-12T14:26:03Z
|
2022-11-07T00:43:32Z
|
https://github.com/rasbt/watermark/issues/89
|
[] |
itamarst
| 2
|
CorentinJ/Real-Time-Voice-Cloning
|
pytorch
| 656
|
Colab notebook is out of date
|
The colab notebook needs an update to keep up with the changes in #472.
Because no one is maintaining the notebook, I suggest we delete it. This will also reduce support questions for which we're not able to answer.
|
closed
|
2021-02-14T16:35:00Z
|
2021-02-14T20:55:55Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/656
|
[] |
ghost
| 0
|
deepinsight/insightface
|
pytorch
| 1,992
|
RuntimeError: CUDA out of memory
|
Hello, when training on the glint360k dataset I am using a server with 8x V100 32 GB GPUs. With batch_size set to 512, the first GPU runs out of memory (RuntimeError: CUDA out of memory), while the remaining 7 GPUs are nowhere near using their full 32 GB. How can I solve this problem? Looking forward to your reply, thank you very much!
|
closed
|
2022-04-29T00:53:59Z
|
2022-04-30T02:09:25Z
|
https://github.com/deepinsight/insightface/issues/1992
|
[] |
quanh1990
| 2
|
d2l-ai/d2l-en
|
data-science
| 2,431
|
What is #@save mean and how does it work?
|
In the book, the authors say `#@save` is used to save the functions we define. I wonder how it works. Is it a feature of Python, or is it designed by d2l?
There is code here: https://github.com/d2l-ai/d2l-en/blob/master/d2l/torch.py
So will a function with the same name overwrite the function there?
|
closed
|
2023-01-27T06:27:21Z
|
2023-01-29T13:00:00Z
|
https://github.com/d2l-ai/d2l-en/issues/2431
|
[] |
likefallwind
| 2
|
sinaptik-ai/pandas-ai
|
data-science
| 1,384
|
Error in Generating Text Output with Semantic Agents
|
While working with Semantic Agents, I want the output in text format. However, I am facing the following error. What can I do to get the output in text using Semantic Agents?
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
<img width="477" alt="Capture (1)" src="https://github.com/user-attachments/assets/8980affb-7835-47c6-ac21-299d196128f8">
|
closed
|
2024-10-01T07:30:45Z
|
2025-01-07T16:08:20Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1384
|
[
"bug"
] |
muhammadshera71
| 1
|
vitalik/django-ninja
|
pydantic
| 938
|
[BUG] @model_validator(mode="before") issue
|
**Describe the bug**
When using something like this, `values` is a `DjangoGetter` instance which is somewhat unexpected to me. Prior to 1.0 and the pydantic update you would get a plain dictionary.
```python
class SomeSchema(Schema):
somevar:int
@model_validator(mode="before")
@classmethod
def foo(cls, values):
values.get("something")
```
**Versions (please complete the following information):**
- Python version: 3.10
- Django version: 4.1
- Django-Ninja version: 1.0.1
- Pydantic version: 2.5.1
|
open
|
2023-11-20T23:04:02Z
|
2023-12-21T05:46:20Z
|
https://github.com/vitalik/django-ninja/issues/938
|
[] |
shughes-uk
| 2
|
Lightning-AI/pytorch-lightning
|
machine-learning
| 19,618
|
Validation runs during overfit, even when turned off
|
### Bug description
I am attempting to overfit a model for demonstration. I am using the CLI with trainer.overfit_batches=.125 and trainer.limit_val_batches=0.
If trainer.limit_val_batches=0 is run without the overfit_batches, the desired effect of turning off the validation dataloader and epoch is achieved.
### What version are you seeing the problem on?
v2.1
### How to reproduce the bug
```python
python cli.py fit -c config.yaml --trainer.limit_val_batches=0 --trainer.overfit_batches=.125
```
### Error messages and logs
```
No error message, undesired behavior
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0): 2.1
#- Lightning App Version (e.g., 0.5.2): N/A
#- PyTorch Version (e.g., 2.0):2.1
#- Python version (e.g., 3.9): 3.10
#- OS (e.g., Linux): Linux
#- CUDA/cuDNN version: 12.1
#- GPU models and configuration: A100
#- How you installed Lightning(`conda`, `pip`, source): PIP
#- Running environment of LightningApp (e.g. local, cloud): SLURM
```
</details>
### More info
_No response_
cc @borda @justusschock @awaelchli
|
closed
|
2024-03-12T14:27:16Z
|
2024-03-13T21:47:57Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19618
|
[
"help wanted",
"docs",
"working as intended",
"trainer",
"ver: 2.1.x"
] |
tkella47
| 2
|
LibrePhotos/librephotos
|
django
| 589
|
Date range selector
|
**Describe the enhancement you'd like**
I have uploaded thousands of pictures that I hope to use LibrePhotos to view, to replace Apple/Microsoft cloud based services, which I currently use. Both of these have an easy way to navigate to a point in time you wish to look at.
If I wanted to, for instance, go back to 2015, it looks like I would need to either make an album manually for that year, or take a lengthy amount of time to manually scroll down to that date. Dates do not appear when scrolling, so I need to wait for LibrePhotos to catch up and load before I know where I am at.
Can we have a better way to quickly scroll to a month/year that we wanted to view?
**Describe why this will benefit the LibrePhotos**
This will help LibrePhotos users to easily navigate to a specific date in time they wish to view photos from.
**Additional context**
Add any other context or screenshots about the enhancement request here.
I don't have any screenshots unless you want my to upload an image of other web apps, such as iCloud (just quickly displaying the date you're scrolling through) or OneDrive (a nice date range right by the scrollbar). It can be something completely different and unique ... Even some sort of a date range selector in a text field, calendar drop downs, etc. Just a way to quickly jump to a date in the past.
|
open
|
2022-08-06T17:34:12Z
|
2022-11-15T14:38:59Z
|
https://github.com/LibrePhotos/librephotos/issues/589
|
[
"enhancement"
] |
ubghacking
| 1
|
tflearn/tflearn
|
data-science
| 806
|
Why is tflearn.data_utils.shuffle() not used in all CIFAR-10 Examples?
|
In the **covnet_cifar10.py** and **network_in_network.py** examples the CIFAR-10 data is shuffled after it's loaded using the `tflearn.data_utils.shuffle()` function:
```python
from tflearn.datasets import cifar10
(X, Y), (X_test, Y_test) = cifar10.load_data()
X, Y = shuffle(X, Y)
```
However, in the **residual_network_cifar10.py** and **resnext_cifar10.py** examples this step is not taken after the data is loaded.
Is there a reason why this shuffle step is not included in these examples?
Is it just that the data is not required to be shuffled for these models to work? Or, is the shuffling of the data taking place during the `.fit()` training where the shuffle parameter is set to true `shuffle=True`?
|
open
|
2017-06-22T16:39:56Z
|
2017-06-26T03:21:19Z
|
https://github.com/tflearn/tflearn/issues/806
|
[] |
rhammell
| 1
|
voxel51/fiftyone
|
computer-vision
| 4,798
|
[FR] How to host Fiftyone in AWS Amplify system?
|
### Instructions
[AWS Amplify](https://aws.amazon.com/amplify) is a full stack development platform for web and mobile apps. I am not sure if we can host this in Amplify. If FiftyOne can already be hosted on Amplify, please let me know if there is some documentation.
### Proposal Summary
Allow hosting Fiftyone App in AWS Amplify.
### What areas of FiftyOne does this feature affect?
- [x] App: FiftyOne application
- [ ] Core: Core `fiftyone` Python library
- [x] Server: FiftyOne server
### Details
Currently, if you want to host any app on Amplify, you can just start a branch in your repo with hosting information. Amplify supports a few frameworks: https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html and I am not sure if FiftyOne is eligible for that.
If not, I want to know the best practice for hosting FiftyOne in an internal environment. Is it:
1. Start an instance or a server.
2. Start Fiftyone Service. (any best practice here? such as launch automaticly and start from a repo branch)
3. Then export instance or server IP to internal only?
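Not from the original issue — a minimal sketch of step 2 using the remote-launch API, with the dataset name as a placeholder; restricting access to internal IPs would still be handled at the network level:
```python
import fiftyone as fo

# Load (or create) the dataset to serve; "my-dataset" is a placeholder name.
dataset = fo.load_dataset("my-dataset")

# Launch the App bound to all interfaces so it is reachable inside the VPC.
session = fo.launch_app(dataset, remote=True, address="0.0.0.0", port=5151)
session.wait()  # keep the server process alive
```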
Thank you
### Willingness to contribute
The FiftyOne Community welcomes contributions! Would you or another member of your organization be willing to contribute an implementation of this feature?
- [ ] Yes. I can contribute this feature independently
- [x] Yes. I would be willing to contribute this feature with guidance from the FiftyOne community
- [ ] No. I cannot contribute this feature at this time
|
open
|
2024-09-13T22:02:47Z
|
2024-09-13T22:02:47Z
|
https://github.com/voxel51/fiftyone/issues/4798
|
[
"feature"
] |
WuZhuoran
| 0
|
SYSTRAN/faster-whisper
|
deep-learning
| 894
|
Update to Silero v5
|
Silero v5 was recently released https://github.com/snakers4/silero-vad/tree/v5.0, and faster-whisper is still using v4.
|
closed
|
2024-07-02T13:19:12Z
|
2024-07-02T14:23:37Z
|
https://github.com/SYSTRAN/faster-whisper/issues/894
|
[] |
dorinclisu
| 2
|
ageitgey/face_recognition
|
python
| 1,473
|
Save face in db
|
Hello everyone — can I somehow save a recognized face to a database and later retrieve it from there to compare with a new photo? I would be glad if some examples were provided.
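Not part of the original question, but a minimal sketch of one way to do this with SQLite (file names, the table layout, and the "alice" label are placeholders):
```python
import sqlite3
import numpy as np
import face_recognition

conn = sqlite3.connect("faces.db")
conn.execute("CREATE TABLE IF NOT EXISTS faces (name TEXT, encoding BLOB)")

# Save: compute the 128-d encoding for a known face and store it as raw bytes.
known = face_recognition.load_image_file("known_person.jpg")  # placeholder path
encoding = face_recognition.face_encodings(known)[0]
conn.execute("INSERT INTO faces VALUES (?, ?)", ("alice", encoding.tobytes()))
conn.commit()

# Load: read the stored encodings back and compare against a new photo.
unknown = face_recognition.load_image_file("unknown.jpg")  # placeholder path
unknown_encoding = face_recognition.face_encodings(unknown)[0]
for name, blob in conn.execute("SELECT name, encoding FROM faces"):
    stored = np.frombuffer(blob, dtype=np.float64)
    match = face_recognition.compare_faces([stored], unknown_encoding)[0]
    print(name, "match" if match else "no match")
```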
|
closed
|
2023-02-17T00:54:10Z
|
2023-02-27T19:12:40Z
|
https://github.com/ageitgey/face_recognition/issues/1473
|
[] |
Skreiphoff
| 1
|
dask/dask
|
numpy
| 11,188
|
Bug in map_blocks when iterating over multiple arrays
|
**Describe the issue**:
When iterating over two arrays with map_blocks, I am running into an issue where each call to the function to process each chunk gets chunks with mismatching dimensions.
**Minimal Complete Verifiable Example**:
```python
from dask import array as da
array1 = da.zeros((100, 24)).rechunk((41, 24))
array2 = da.zeros((100,)).rechunk((41,))
def process_chunk(a, b):
print(a.shape, b.shape)
return a + b[:, None]
r = da.map_blocks(process_chunk, array1, array2)
r.compute()
```
This gives:
```
(0, 0) (0,)
(1, 1) (1,)
(18, 24) (18,)
(18, 24) (41,)
(18, 24) (41,)
(41, 24) (41,)
(41, 24) (18,)
Traceback (most recent call last):
(41, 24) (18,)
File "/home/tom/Projects/parallel-modeling-examples/eis/minimal.py", line 11, in <module>
(41, 24) (41,)
(41, 24) (41,)
r.compute()
(41, 24) (41,)
File "/home/tom/python/dev/lib/python3.12/site-packages/dask/base.py", line 376, in compute
(result,) = compute(self, traverse=False, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tom/python/dev/lib/python3.12/site-packages/dask/base.py", line 662, in compute
results = schedule(dsk, keys, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tom/Projects/parallel-modeling-examples/eis/minimal.py", line 8, in process_chunk
return a + b[:, None]
~~^~~~~~~~~~~~
ValueError: operands could not be broadcast together with shapes (18,24) (41,1)
```
Normally there should only be three sets of chunks to iterate over - two with size 41 and one with size 18. However, it looks like more calls are made than this, and some of them have mismatching dimensions for a and b (41 for one and 18 for the other).
**Environment**:
- Dask version: 2024.6.0
- Python version: 3.12.4
- Operating System: Ubuntu
- Install method (conda, pip, source): pip
|
closed
|
2024-06-19T12:45:45Z
|
2025-01-13T13:12:56Z
|
https://github.com/dask/dask/issues/11188
|
[
"array"
] |
astrofrog
| 2
|
alteryx/featuretools
|
scikit-learn
| 2,419
|
Standardize Regular Expressions for Natural Language Primitives
|
Multiple primitives make use of regular expressions. Some of them define what punctuation to delimit on. We should standardize these. This would give users the confidence that Primitive A does not consider a string to be one word, while Primitive B considers it to be two. We could define the regexes in a common file and import them in the primitives.
This was originally an issue on `nlp-primitives` before the primitives were moved here.
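A minimal sketch of what "define the regexes in a common file and import them in the primitives" could look like; the module name and helper are hypothetical, not the real featuretools layout:
```python
# constants.py (hypothetical shared module)
import re
import string

# Single source of truth for what counts as a delimiter between words.
PUNCTUATION = string.punctuation
WORD_SPLIT_RE = re.compile(r"[\s" + re.escape(PUNCTUATION) + r"]+")

def split_words(text: str) -> list[str]:
    """Split text into words using the shared delimiter definition."""
    return [tok for tok in WORD_SPLIT_RE.split(text) if tok]

# A primitive would then do:
#   from .constants import split_words
#   n_words = len(split_words(value))
```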
|
closed
|
2022-12-20T00:20:38Z
|
2023-01-03T17:35:13Z
|
https://github.com/alteryx/featuretools/issues/2419
|
[
"refactor"
] |
sbadithe
| 0
|
erdewit/ib_insync
|
asyncio
| 328
|
Order held while securities are located
|
This is taken from ib_insync log:
Error 404, reqId 41686: Order held while securities are located.
Canceled order: Trade(contract=Contract(secType='STK', conId=416854744, symbol='DUST', exchange='SMART', primaryExchange='ARCA', currency='USD', localSymbol='DUST', tradingClass='DUST'), order=MarketOrder(orderId=41686, action='SELL', totalQuantity=109), orderStatus=OrderStatus(orderId=41686, status='Cancelled', filled=0, remaining=0, avgFillPrice=0.0, permId=0, parentId=0, lastFillPrice=0.0, clientId=0, whyHeld='', mktCapPrice=0.0), fills=[], log=[TradeLogEntry(time=datetime.datetime(2021, 1, 11, 20, 50, 5, 653588, tzinfo=datetime.timezone.utc), status='PendingSubmit', message=''), TradeLogEntry(time=datetime.datetime(2021, 1, 11, 20, 50, 7, 176698, tzinfo=datetime.timezone.utc), status='Cancelled', message='Error 404, reqId 41686: Order held while securities are located.')])
The problem is: ib_insync receives error code 404 and marks the order as Cancelled. However, this behaviour is incorrect, since the order is still open in TWS after receiving 404. This error code seems to be a notification and not an actual error.
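Until the library's handling changes, one application-level stopgap is to watch the raw error events and treat 404 as informational; a rough, untested sketch (the connection parameters are placeholders):
```python
from ib_insync import IB

ib = IB()
ib.connect("127.0.0.1", 7497, clientId=1)

def on_error(reqId, errorCode, errorString, contract):
    # 404 = "Order held while securities are located" -- informational,
    # the order is still live in TWS, so don't treat it as a cancel.
    if errorCode == 404:
        print(f"order {reqId} held while shares are located: {errorString}")

ib.errorEvent += on_error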
|
closed
|
2021-01-12T06:57:12Z
|
2021-01-12T09:49:44Z
|
https://github.com/erdewit/ib_insync/issues/328
|
[] |
bandipapa
| 0
|
fastapi/sqlmodel
|
pydantic
| 221
|
Circular Imports with Relationship
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
# post.py
from typing import TYPE_CHECKING, Optional
from sqlmodel import Field, Relationship, SQLModel
if TYPE_CHECKING:
from models.user import UserRead
class PostBase(SQLModel):
title: str = Field(index=True)
desc: str
content: str
class Post(PostBase, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
author_id: int = Field(default=None, foreign_key="user.id")
author: "UserRead" = Relationship(back_populates="posts")
class PostRead(PostBase):
id: int
class PostReadWithAuthor(PostRead):
author: "UserRead"
```
```python
#user.py
from typing import TYPE_CHECKING, List, Optional
from pydantic import EmailStr, HttpUrl
from sqlmodel import Field, Relationship, SQLModel
if TYPE_CHECKING:
from models.post import PostRead
class UserBase(SQLModel):
nickname: str
avatar_url: HttpUrl
class User(UserBase, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
email: EmailStr = Field(index=True)
password_hash: str
posts: List["PostRead"] = Relationship(back_populates="author")
class UserRead(UserBase):
id: int
email: EmailStr
class UserReadWithPosts(UserRead):
posts: List["PostRead"]
```
### Description
I followed the [tutorial](https://sqlmodel.tiangolo.com/tutorial/code-structure/) and used TYPE_CHECKING to handle imports for Relationships between different models. However, after setting up the Relationship, when I opened the API docs ( `localhost:8000/docs` ), I got a TypeError:
```
File "/xxxxxxxxxx/python3.9/site-packages/fastapi/utils.py", line 24, in get_model_definitions
m_schema, m_definitions, m_nested_models = model_process_schema(
File "pydantic/schema.py", line 617, in pydantic.schema.model_process_schema
File "pydantic/schema.py", line 658, in pydantic.schema.model_type_schema
File "pydantic/schema.py", line 258, in pydantic.schema.field_schema
File "pydantic/schema.py", line 563, in pydantic.schema.field_type_schema
File "pydantic/schema.py", line 922, in pydantic.schema.field_singleton_schema
File "/xxxxxxxxxx/python3.9/abc.py", line 123, in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
TypeError: issubclass() arg 1 must be a class
```
It works normally when I don't use TYPE_CHECKING, but I can't use Relationship on both sides at the same time.
I'm not sure if this is Pydantic's or FastAPI's problem. Is there any way to define such a model in different files?
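One pattern that may help (I'm not certain it is the officially recommended fix) is to resolve the string annotations explicitly in a module that can import both sides without a cycle:
```python
# main.py (or wherever both model modules are importable without a circular import)
from models.post import Post, PostRead, PostReadWithAuthor
from models.user import User, UserRead, UserReadWithPosts

# Resolve the forward references that were only strings at class-definition time.
PostReadWithAuthor.update_forward_refs(UserRead=UserRead)
UserReadWithPosts.update_forward_refs(PostRead=PostRead)
```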
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.9.9
### Additional Context
_No response_
|
closed
|
2022-01-13T16:10:56Z
|
2022-01-14T06:57:45Z
|
https://github.com/fastapi/sqlmodel/issues/221
|
[
"question"
] |
laipz8200
| 1
|
nerfstudio-project/nerfstudio
|
computer-vision
| 2,717
|
floating things when training ngp model on ficus scene
|
Hi! I believe nerfstudio uses nerfacc to train instant-ngp models on the nerf-synthetic dataset. When training on the ficus scene, I notice floating artifacts in the scene cube, which hurt the PSNR drastically, whereas no such artifacts appear in scenes like lego.
Here is a depth map picture rendered from ficus dataset:
<img width="715" alt="1" src="https://github.com/nerfstudio-project/nerfstudio/assets/48342903/e2fe5d2c-f38e-4de4-9935-d7c3fd37c3b7">
any idea on how to fix this? Thanks
Here is a copy of my config:
```python
ngp_config = MethodSpecification(
    config=TrainerConfig(
        method_name="ngp",
        steps_per_eval_batch=200,
        steps_per_save=2000,
        steps_per_eval_all_images=2000,
        max_num_iterations=20000,
        mixed_precision=False,
        use_grad_scaler=False,
        pipeline=DynamicBatchPipelineConfig(
            datamanager=VanillaDataManagerConfig(
                dataparser=BlenderDataParserConfig()
            ),
            target_num_samples=1 << 18,
            max_num_samples_per_ray=2**8,
            model=SegNGPModelConfig(eval_num_rays_per_chunk=8192),
        ),
        optimizers={
            "fields": {
                "optimizer": AdamOptimizerConfig(lr=1e-2, eps=1e-15, weight_decay=1e-5),  # 1e-5 if in ["materials", "ficus", "drums"] else 1e-6
                "scheduler": MultiStepSchedulerConfig(
                    max_steps=20000,
                    gamma=0.33,
                    milestones=(10000, 15000, 18000),  # need LinearLR ?
                ),
            },
        },
        vis="wandb",
    ),
    description="Segment NGP config",
)
```
|
open
|
2024-01-03T11:35:29Z
|
2024-01-03T11:35:29Z
|
https://github.com/nerfstudio-project/nerfstudio/issues/2717
|
[] |
Moreland-cas
| 0
|
Kanaries/pygwalker
|
plotly
| 257
|
Hide Visual Interface for Data Exploration in production environment
|
Hi,
I want to use pygwalker for dashboard creation in streamlit.
In development mode, I want the Visual Interface for Data Exploration and chart creation.
However, I want to hide it in the production environment; I just need the chart with a data filter.
Please guide me.
|
closed
|
2023-10-07T03:54:48Z
|
2023-10-21T14:13:32Z
|
https://github.com/Kanaries/pygwalker/issues/257
|
[
"enhancement",
"P1"
] |
mehtatejas
| 3
|
comfyanonymous/ComfyUI
|
pytorch
| 6,679
|
AssertionError: Torch not compiled with CUDA enabled
|
### Expected Behavior
Problem after updating ComfyUI using update_comfyui_and_python_dependencies.bat. When I then open it with run_nvidia_gpu.bat, it shows me this error. There was no problem when I checked it 2 weeks ago.
### Actual Behavior
c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-02-02 19:05:50.830
** Platform: Windows
** Python version: 3.11.8 (tags/v3.11.8:db85d51, Feb 6 2024, 22:03:32) [MSC v.1937 64 bit (AMD64)]
** Python executable: c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\python_embeded\python.exe
** ComfyUI Path: c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI
** User directory: C:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI\user
** ComfyUI-Manager config path: C:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI\user\default\ComfyUI-Manager\config.ini
** Log path: C:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI\user\comfyui.log
Prestartup times for custom nodes:
4.7 seconds: C:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI\custom_nodes\ComfyUI-Manager
Checkpoint files will always be loaded safely.
Traceback (most recent call last):
File "c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI\main.py", line 136, in <module>
import execution
File "c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI\execution.py", line 13, in <module>
import nodes
File "c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI\nodes.py", line 22, in <module>
import comfy.diffusers_load
File "c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI\comfy\diffusers_load.py", line 3, in <module>
import comfy.sd
File "c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI\comfy\sd.py", line 6, in <module>
from comfy import model_management
File "c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI\comfy\model_management.py", line 166, in <module>
total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
^^^^^^^^^^^^^^^^^^
File "c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI\comfy\model_management.py", line 129, in get_torch_device
return torch.device(torch.cuda.current_device())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 971, in current_device
_lazy_init()
File "c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 310, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
c:\A\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu>pause
Press any key to continue . . .
### Steps to Reproduce
Run update and open: run_nvidia_gpu.bat
### Debug Logs
```powershell
Do not know how to get Debug because comfyui is not opening.
```
### Other
_No response_
|
closed
|
2025-02-02T18:11:34Z
|
2025-02-02T20:06:52Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6679
|
[
"Potential Bug"
] |
MarcinSoj
| 3
|
mlfoundations/open_clip
|
computer-vision
| 564
|
why text unimodal of coca don't use casual mask self-attention?
|
Thanks for your great works. I have several questions on the coca model.
1. In the original paper, both the unimoal and the multimodal use causally-masked self-attention. However, the implement of unimodal in this repo use clip. If you donnot use causally-masked, the caption loss to be improperly computed since later words can see previous words?
"That is, the bottom n_uni unimodal decoder layers encode the input text as latent vectors with **causally-masked self-attention**, and the top n_multi multimodal layers further apply **causally-masked self-attention** and together with cross-attention to the output of the visual encoder. " (from original paper.)

2. In the README, the fine-tuning script for the CoCa model sets "coca-contrastive-loss-weight 0"; why not use the CLIP loss?
|
closed
|
2023-07-05T11:37:27Z
|
2023-07-06T03:59:43Z
|
https://github.com/mlfoundations/open_clip/issues/564
|
[] |
PanXiebit
| 2
|
deezer/spleeter
|
tensorflow
| 96
|
[Bug] Different audio duration after wave to mp3
|
After converting the input mp3 file to wave the duration changes:
```
(.env) ip-192-168-23-184:spleeter loretoparisi$ ffprobe -i output/test/vocals.mp3
ffprobe version 4.0 Copyright (c) 2007-2018 the FFmpeg developers
built with Apple LLVM version 9.1.0 (clang-902.0.39.1)
configuration: --prefix=/usr/local/Cellar/ffmpeg/4.0 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-gpl --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --enable-videotoolbox --disable-lzma
libavutil 56. 14.100 / 56. 14.100
libavcodec 58. 18.100 / 58. 18.100
libavformat 58. 12.100 / 58. 12.100
libavdevice 58. 3.100 / 58. 3.100
libavfilter 7. 16.100 / 7. 16.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 1.100 / 5. 1.100
libswresample 3. 1.100 / 3. 1.100
libpostproc 55. 1.100 / 55. 1.100
Input #0, mp3, from 'output/test/vocals.mp3':
Metadata:
encoder : Lavf58.12.100
**Duration: 00:03:27.78, start: 0.025057, bitrate: 128 kb/s**
Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 128 kb/s
Metadata:
encoder : Lavc58.18
```
while the wave file was
```
(.env) ip-192-168-23-184:spleeter loretoparisi$ ffprobe -i output/test/vocals.wav
ffprobe version 4.0 Copyright (c) 2007-2018 the FFmpeg developers
built with Apple LLVM version 9.1.0 (clang-902.0.39.1)
configuration: --prefix=/usr/local/Cellar/ffmpeg/4.0 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-gpl --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --enable-videotoolbox --disable-lzma
libavutil 56. 14.100 / 56. 14.100
libavcodec 58. 18.100 / 58. 18.100
libavformat 58. 12.100 / 58. 12.100
libavdevice 58. 3.100 / 58. 3.100
libavfilter 7. 16.100 / 7. 16.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 1.100 / 5. 1.100
libswresample 3. 1.100 / 3. 1.100
libpostproc 55. 1.100 / 55. 1.100
Input #0, wav, from 'output/test/vocals.wav':
Metadata:
encoder : Lavf58.12.100
Duration: 00:03:27.75, bitrate: 1411 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 2 channels, s16, 1411 kb/s
```
|
closed
|
2019-11-14T17:33:40Z
|
2019-11-14T22:41:33Z
|
https://github.com/deezer/spleeter/issues/96
|
[
"bug",
"help wanted",
"question"
] |
loretoparisi
| 3
|
d2l-ai/d2l-en
|
data-science
| 2,618
|
[suggestion of presentation] pre-norm transformers
|
The figure and description in transformer.ipynb follow the historical post-norm version. Since (almost?) all modern projects use pre-norm transformers, maybe it could be presented up front rather than waiting until the ViT section, to give newbies (like me) the best first impression, and leave the historically significant version to a secondary position.
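For concreteness, the difference is only where layer normalization sits relative to the residual connection; a schematic sketch (not the book's code):
```python
def post_norm_block(x, attn, ffn, ln1, ln2):
    # Original Transformer (post-norm): normalize after adding the residual.
    x = ln1(x + attn(x))
    return ln2(x + ffn(x))

def pre_norm_block(x, attn, ffn, ln1, ln2):
    # Modern default (pre-norm): normalize the sub-layer input instead.
    x = x + attn(ln1(x))
    return x + ffn(ln2(x))
```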
Thanks! cheers for your great book!
|
open
|
2024-08-30T13:57:12Z
|
2024-08-30T13:57:12Z
|
https://github.com/d2l-ai/d2l-en/issues/2618
|
[] |
jkpjkpjkp
| 0
|
albumentations-team/albumentations
|
deep-learning
| 2,292
|
[Speed up] Speed up Elastic
|
Benchmark shows that `imgaug` has faster Elastic implementations => need to learn from it and fix.
|
open
|
2025-01-24T15:56:02Z
|
2025-02-17T01:19:47Z
|
https://github.com/albumentations-team/albumentations/issues/2292
|
[
"Speed Improvements"
] |
ternaus
| 2
|
mlfoundations/open_clip
|
computer-vision
| 167
|
Add jit save for ViT-H and ViT-g please
|
I tried new pretrained models, but they cannot be saved with torch jit script:
```model, _, preprocess = open_clip.create_model_and_transforms('ViT-g-14', 'laion2b_s12b_b42k', jit=True)```
Output:
```
File "/opt/conda/lib/python3.7/site-packages/open_clip/model.py", line 326
return self.attn(x, x, x, need_weights=False, attn_mask=attn_mask)[0]
else:
return self.attn(x, attn_mask=attn_mask)
~~~~~~~~~ <--- HERE
'ResidualAttentionBlock.attention' is being compiled since it was called from 'ResidualAttentionBlock.forward'
File "/opt/conda/lib/python3.7/site-packages/open_clip/model.py", line 329
def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None):
x = x + self.ln_attn(self.attention(self.ln_1(x), attn_mask=attn_mask))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
x = x + self.mlp(self.ln_2(x))
return x
```
|
closed
|
2022-09-16T08:49:37Z
|
2022-09-16T15:06:27Z
|
https://github.com/mlfoundations/open_clip/issues/167
|
[] |
25icecreamflavors
| 1
|
google-research/bert
|
tensorflow
| 695
|
1
|
1
|
closed
|
2019-06-13T07:34:52Z
|
2019-06-13T07:43:57Z
|
https://github.com/google-research/bert/issues/695
|
[] |
sbmark
| 0
|
httpie/cli
|
api
| 1,105
|
[Snap] Remove unused Pygments lexers
|
In the Snap package, we are bundling a lot of `Pygments` lexers that will never be used. Maybe we should take time to remove them at some point.
Here is the relevant output of `$ snapcraft --debug` (which builds the package):
```
(...)
Listing '/root/prime/lib/python3.8/site-packages/pygments'...
(...)
Listing '/root/prime/lib/python3.8/site-packages/pygments/lexers'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/__init__.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_asy_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_cl_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_cocoa_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_csound_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_julia_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_lasso_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_lua_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_mapping.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_mql_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_mysql_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_openedge_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_php_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_postgres_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_scilab_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_sourcemod_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_stan_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_stata_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_tsql_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_usd_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_vbscript_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/_vim_builtins.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/actionscript.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/agile.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/algebra.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/ambient.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/amdgpu.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/ampl.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/apdlexer.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/apl.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/archetype.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/arrow.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/asm.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/automation.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/bare.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/basic.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/bibtex.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/boa.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/business.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/c_cpp.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/c_like.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/capnproto.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/cddl.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/chapel.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/clean.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/compiled.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/configs.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/console.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/crystal.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/csound.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/css.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/d.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/dalvik.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/data.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/devicetree.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/diff.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/dotnet.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/dsls.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/dylan.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/ecl.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/eiffel.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/elm.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/email.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/erlang.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/esoteric.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/ezhil.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/factor.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/fantom.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/felix.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/floscript.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/forth.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/fortran.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/foxpro.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/freefem.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/functional.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/futhark.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/gcodelexer.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/gdscript.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/go.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/grammar_notation.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/graph.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/graphics.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/graphviz.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/haskell.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/haxe.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/hdl.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/hexdump.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/html.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/idl.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/igor.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/inferno.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/installers.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/int_fiction.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/iolang.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/j.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/javascript.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/julia.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/jvm.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/kuin.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/lisp.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/make.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/markup.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/math.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/matlab.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/mime.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/ml.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/modeling.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/modula2.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/monte.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/mosel.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/ncl.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/nimrod.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/nit.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/nix.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/oberon.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/objective.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/ooc.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/other.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/parasail.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/parsers.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/pascal.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/pawn.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/perl.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/php.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/pointless.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/pony.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/praat.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/prolog.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/promql.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/python.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/qvt.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/r.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/rdf.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/rebol.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/resource.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/ride.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/rnc.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/roboconf.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/robotframework.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/ruby.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/rust.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/sas.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/scdoc.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/scripting.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/sgf.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/shell.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/sieve.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/slash.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/smalltalk.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/smv.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/snobol.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/solidity.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/special.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/sql.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/stata.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/supercollider.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/tcl.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/teal.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/templates.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/teraterm.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/testing.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/text.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/textedit.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/textfmts.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/theorem.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/thingsdb.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/tnt.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/trafficscript.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/typoscript.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/unicon.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/urbi.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/usd.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/varnish.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/verification.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/web.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/webassembly.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/webidl.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/webmisc.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/whiley.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/x10.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/xorg.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/yang.py'...
Compiling '/root/prime/lib/python3.8/site-packages/pygments/lexers/zig.py'...
(...)
```
|
closed
|
2021-07-05T12:01:07Z
|
2021-08-27T11:07:03Z
|
https://github.com/httpie/cli/issues/1105
|
[
"enhancement",
"packaging"
] |
BoboTiG
| 3
|
albumentations-team/albumentations
|
machine-learning
| 2,447
|
[Feature request] Add apply_to_images to CenterCrop
|
open
|
2025-03-11T01:21:46Z
|
2025-03-11T01:21:52Z
|
https://github.com/albumentations-team/albumentations/issues/2447
|
[
"enhancement",
"good first issue"
] |
ternaus
| 0
|
|
huggingface/pytorch-image-models
|
pytorch
| 2,110
|
[FEATURE] Add image backbones from `MobileCLIP` paper
|
[MobileCLIP](https://github.com/apple/ml-mobileclip/) is a really fast CLIP architecture for mobile inference - about 3x faster than the fastest publicly available CLIP backbone `convnext_base_w` for inference on iOS / macOS devices.
They introduce 3 novel image backbones: `mci{0|1|2}`. It would be amazing if these models were available directly via `timm`. I believe this would be an essential first step towards getting it into `open_clip` for fine-tuning.
The arch, defined [here](https://github.com/apple/ml-mobileclip/blob/db324e3d5464a9d93ae21281e40f8a60b619488b/mobileclip/models/mci.py#L858-L933), uses MobileOne and FastVIT components, which are already available in `timm`. I'm not sure how compatible the re-implementation there is with the existing one in `timm` out of the box, but it smells like integration is definitely possible.
|
closed
|
2024-03-16T10:24:56Z
|
2024-06-14T19:29:02Z
|
https://github.com/huggingface/pytorch-image-models/issues/2110
|
[
"enhancement"
] |
rsomani95
| 7
|
wkentaro/labelme
|
deep-learning
| 859
|
Labelling z-stack images
|
I would like to label microscopic images that come as a z-stack of several focus levels.
LabelMe does open such images but only shows the first layer. Is there any possibility to browse the other layers as well? Or could such option be added?
|
closed
|
2021-04-15T10:04:27Z
|
2022-06-25T04:44:12Z
|
https://github.com/wkentaro/labelme/issues/859
|
[] |
MartinTheuerkauf
| 0
|
horovod/horovod
|
tensorflow
| 3,343
|
[Error] horovod.common.exceptions.HorovodInternalError: ncclCommInitRank failed: unhandled system error
|
Hello, I am using the latest horovod docker for training deep learning models:
**Environment:**
1. Framework: (PyTorch)
2. Framework version:
3. Horovod version: Horovod v0.22.1
4. MPI version: mpirun (Open MPI) 3.0.0
5. CUDA version: cuda_11.2
6. NCCL version:
7. Python version: Python 3.7.5
10. OS and version: Ubuntu 18.04
11. GCC version: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
12. CMake version: cmake version 3.10.2
**Checklist:**
1. Did you search issues to find if somebody asked this question before? -- Yes, but not help.
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? -- Yes.
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? -- Yes
4. Did you check if you question is answered in the [troubleshooting guide] -- Yes (https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Hello, I am using the latest horovod docker for training deep learning models on a single machine:
I run the following command:
$ nvidia-docker exec -it stoic_hermann bash
root@d7944d7xxxx: /PATH/TO/TRAIN/CODE# horovodrun -np 6 -H localhost:6 python train.py
AND got the error like:
[1,3]<stddiag>:[d7944d77ef3c:16492] Read -1, expected 27600, errno = 1
[1,4]<stddiag>:[d7944d77ef3c:16493] Read -1, expected 27601, errno = 1
[1,0]<stderr>:Traceback (most recent call last):
[1,0]<stderr>: File "/usr/local/lib/python3.7/dist-packages/horovod/torch/mpi_ops.py", line 878, in synchronize
[1,0]<stderr>: mpi_lib.horovod_torch_wait_and_clear(handle)
[1,0]<stderr>:RuntimeError: ncclCommInitRank failed: unhandled system error
Full error report:
[log-sub.txt](https://github.com/horovod/horovod/files/7806535/log-sub.txt)
May I ask how to solve this problem?
Thanks!!
Best,
Ross.
|
closed
|
2022-01-04T09:14:28Z
|
2022-01-04T13:06:59Z
|
https://github.com/horovod/horovod/issues/3343
|
[] |
RossStream
| 5
|
pytest-dev/pytest-qt
|
pytest
| 430
|
BUG: Click Away from MenuBar does not Close MenuBar
|
### System
**OS:** Windows 10
**Python:** 3.8.10
**Qt:** PySide6
**pytest-qt:** 4.0.2
### Problem
After clicking a `QMenuBar` item, clicking away from the menu does not close it when using `qtbot`.
### MRE
#### `myproj/main.py`
```python
from qtpy import QtWidgets
class View(QtWidgets.QMainWindow):
"""The main window."""
def __init__(self):
super().__init__()
self.container = QtWidgets.QFrame()
self.layout_ = QtWidgets.QVBoxLayout()
self.layout_.setSpacing(0)
self.layout_.setContentsMargins(0, 0, 0, 0)
self.container.setLayout(self.layout_)
self.setCentralWidget(self.container)
self._create_actions()
self._create_menubar()
def _create_actions(self):
self.new_action = QtWidgets.QAction(
'&New Project...',
self,
)
def _create_menubar(self) -> None:
self.menubar = self.menuBar()
self.file_menu = self.menubar.addMenu('&File')
self.file_menu.addAction(self.new_action)
if __name__ == '__main__':
app = QtWidgets.QApplication([])
window = View()
window.showMaximized()
app.exec_()
```
#### `tests/test_window.py`
```python
import pytest
from myproj.main import View
from pytestqt.qtbot import QtBot
from qtpy import QtCore
@pytest.fixture(name='window', scope='function')
def fixture_app(qtbot):
window = View()
window.setMouseTracking(True)
window.showMaximized()
yield window
def test_click_outside_menu_closes_menu(window: 'View', qtbot: 'QtBot') -> None:
"""Tests that menu closes when user clicks away from it.
Args:
window (view): (fixture) Qt application main window
"""
menubar = window.menubar
file_menu = window.file_menu
new_action = window.new_action
file_rect = menubar.actionGeometry(file_menu.menuAction())
new_rect = file_menu.actionGeometry(new_action)
pause = 1000 # milliseconds
# Assert - Precondition
assert not file_menu.isVisible()
# Act - Menubar
qtbot.wait(pause)
qtbot.mouseMove(menubar, file_rect.center())
qtbot.wait(pause)
qtbot.mouseClick(menubar, QtCore.Qt.LeftButton, pos=file_rect.center())
qtbot.wait(pause)
qtbot.mouseMove(file_menu, new_rect.center())
qtbot.wait(pause)
qtbot.mouseMove(window, window.rect().center())
qtbot.wait(pause)
qtbot.mouseClick(window, QtCore.Qt.LeftButton, pos=window.rect().center())
qtbot.wait(pause)
# Assert - Postcondition
assert not file_menu.isVisible()
```
### Result

|
open
|
2022-06-07T23:58:00Z
|
2022-06-08T16:30:56Z
|
https://github.com/pytest-dev/pytest-qt/issues/430
|
[] |
adam-grant-hendry
| 3
|
holoviz/panel
|
jupyter
| 7,605
|
Several API reference docs are empty / missing content
|
Examples:
https://panel.holoviz.org/api/panel.param.html
https://panel.holoviz.org/api/panel.pipeline.html
This page has headers, but no content, and a bunch of links to widgets in the right-hand sidebar that do not error, but also don't lead anywhere on the page.
Similar issue on this page:
https://panel.holoviz.org/api/panel.pane.html#
|
closed
|
2025-01-08T06:55:49Z
|
2025-01-23T17:02:06Z
|
https://github.com/holoviz/panel/issues/7605
|
[
"type: docs"
] |
Coderambling
| 2
|
indico/indico
|
sqlalchemy
| 6,795
|
Multiple participant lists showing in 'lecture' after changing type of event
|
**Describe the bug**
In type 'lecture' I cannot create several participant lists; in 'conference' I can (e.g. a participants list and a waiting list). If I have two participant lists in 'conference' and I change the type to 'lecture', the second participant list is still displayed on the public event page; however, in the backend I cannot manage this list and have to switch back to type 'conference'.
**To Reproduce**
Steps to reproduce the behavior:
1. Set event type to conference
2. Create and acitvate 2 registration forms
3. Set event type to lecture
4. Display view public event page
**Expected behavior**
- a: In type 'lecture' a second participant list is not displayed on the public page or
- b: I can manage the second participants list also in type 'lecture'
**Screenshots**

**Additional context**
Indico 3.3.5
|
open
|
2025-03-11T08:53:21Z
|
2025-03-17T14:21:41Z
|
https://github.com/indico/indico/issues/6795
|
[
"enhancement"
] |
geyslein
| 2
|
PaddlePaddle/ERNIE
|
nlp
| 503
|
Generating Chinese word embeddings
|
Hello, can the ERNIE model generate Chinese word-level vectors rather than character-level vectors?
|
closed
|
2020-06-22T03:10:31Z
|
2020-09-08T05:20:32Z
|
https://github.com/PaddlePaddle/ERNIE/issues/503
|
[
"wontfix"
] |
hvuehu
| 4
|
tensorpack/tensorpack
|
tensorflow
| 1,335
|
Training stuck for faster rcnn on coco dataset for only person class.
|
I want to train faster rcnn on coco data using pretrained "COCO-MaskRCNN-R50FPN2x.npz" weights for person class only.
I am facing an issue where training gets stuck. Please see the images below.


Also the GPU utilization is zero.
I have added ignore_mismatch=True while reading the weights to skip loading the last fc layer, because the pretrained weights have 80 classes while I want to train only 2 (BG, Person).
|
closed
|
2019-10-05T17:10:38Z
|
2019-10-06T18:38:34Z
|
https://github.com/tensorpack/tensorpack/issues/1335
|
[
"duplicate"
] |
dvlshah
| 4
|
developmentseed/lonboard
|
jupyter
| 8
|
`Map.save` for offline (outside of Python) use
|
- Serialize the Arrow table to a base64 string
- Create an HTML file with data, layer parameters, etc
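Not from the original notes — a rough sketch of the serialization half, assuming pyarrow's IPC stream format is what gets embedded (the HTML template here is just a placeholder for the real lonboard bundle):
```python
import base64
import io

import pyarrow as pa

def table_to_base64(table: pa.Table) -> str:
    """Serialize an Arrow table to the IPC stream format and base64-encode it."""
    sink = io.BytesIO()
    with pa.ipc.new_stream(sink, table.schema) as writer:
        writer.write_table(table)
    return base64.b64encode(sink.getvalue()).decode("ascii")

table = pa.table({"lng": [2.35, -0.13], "lat": [48.86, 51.51]})
html = f"""<!DOCTYPE html>
<html><body>
<script type="application/json" id="layer-data">{table_to_base64(table)}</script>
<!-- the bundled JS would decode this and reconstruct the layers on load -->
</body></html>"""

with open("map.html", "w") as f:
    f.write(html)
```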
|
closed
|
2023-09-29T18:11:09Z
|
2023-11-03T21:48:51Z
|
https://github.com/developmentseed/lonboard/issues/8
|
[
"javascript"
] |
kylebarron
| 1
|
liangliangyy/DjangoBlog
|
django
| 130
|
Cached static files return 404
|
Cache configuration:
```python
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'TIMEOUT': 10800,
        'LOCATION': 'unique-snowflake',
    }
}
```
The error is as follows:
```
[22/Jun/2018 22:05:14] "GET /static/CACHE/css/c9a7d7e6abbc.css HTTP/1.1" 404 2361
[22/Jun/2018 22:05:14] "GET /static/CACHE/css/2490f7c132d2.css HTTP/1.1" 404 2361
[22/Jun/2018 22:05:14] "GET /static/CACHE/js/c71e77581f2f.js HTTP/1.1" 404 2361
```
|
closed
|
2018-06-22T14:08:04Z
|
2018-06-22T14:12:50Z
|
https://github.com/liangliangyy/DjangoBlog/issues/130
|
[] |
jsuyanyong
| 1
|
aiortc/aioquic
|
asyncio
| 422
|
pip install -e . error
|
I face the following error when I run `pip install -e .`
```
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Installing collected packages: UNKNOWN
Attempting uninstall: UNKNOWN
Found existing installation: UNKNOWN 0.0.0
Uninstalling UNKNOWN-0.0.0:
Successfully uninstalled UNKNOWN-0.0.0
Running setup.py develop for UNKNOWN
error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> [10 lines of output]
running develop
/usr/lib/python3/dist-packages/setuptools/command/easy_install.py:158: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
/usr/lib/python3/dist-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running egg_info
writing manifest file 'UNKNOWN.egg-info/SOURCES.txt'
running build_ext
copying build/lib.linux-x86_64-3.10/aioquic/_buffer.abi3.so -> aioquic
error: could not create 'aioquic/_buffer.abi3.so': No such file or directory
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
WARNING: No metadata found in /usr/local/lib/python3.10/dist-packages/UNKNOWN-0.0.0-py3.10-linux-x86_64.egg
Rolling back uninstall of unknown
Moving to /usr/local/lib/python3.10/dist-packages/UNKNOWN-0.0.0-py3.10-linux-x86_64.egg
from /usr/local/lib/python3.10/dist-packages/~NKNOWN-0.0.0-py3.10-linux-x86_64.egg
error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> [10 lines of output]
running develop
/usr/lib/python3/dist-packages/setuptools/command/easy_install.py:158: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
/usr/lib/python3/dist-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running egg_info
writing manifest file 'UNKNOWN.egg-info/SOURCES.txt'
running build_ext
copying build/lib.linux-x86_64-3.10/aioquic/_buffer.abi3.so -> aioquic
error: could not create 'aioquic/_buffer.abi3.so': No such file or directory
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
```
|
closed
|
2023-11-24T18:48:42Z
|
2023-11-30T16:00:36Z
|
https://github.com/aiortc/aioquic/issues/422
|
[] |
EsmaeiliSina
| 1
|
huggingface/transformers
|
pytorch
| 36,516
|
Object detection tutorial uses buggy dataset, may lead to crash during training
|
### System Info
The [Object detection tutorial](https://huggingface.co/docs/datasets/object_detection) uses the [CPPE-5 dataset](https://huggingface.co/datasets/rishitdagli/cppe-5) to finetune a DETR model. This dataset contains multiple images with wrong annotations. This is clear when inspecting the [CPPE-5-v2 dataset](https://huggingface.co/datasets/danelcsb/cppe-5-v2), which removed 48 wrongly annotated images from the original CPPE-5 dataset. One obvious example is `image_id` 762, which is declared as 1200 × 1200 pixels while being obviously rectangular (the image is in fact 2246 × 1498 pixels). As you can see, displaying the bounding boxes on this image using the code from the tutorial gives nonsensical results:

Even more problematic, when preprocessing some images in this dataset with Albumentations, this can trigger exceptions that are uncaught by the current code. For example, consider `image_id` 83. It is annotated as
2481 × 1478 pixels, despite being in fact 1000 × 600 pixels, and its objects are as follows.
```json
{
"id": [ 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673 ],
"area": [ 24705, 14155, 21672, 13065, 20150, 15990, 18260, 12348, 17554, 16092, 14740, 19890, 67554, 53118, 46125, 56376, 839188, 624780, 658000, 679782 ],
"bbox": [ [ 212, 392, 135, 183 ], [ 564, 866, 95, 149 ], [ 703, 421, 126, 172 ], [ 1019, 840, 65, 201 ], [ 1187, 458, 130, 155 ], [ 1505, 880, 78, 205 ], [ 1568, 542, 110, 166 ], [ 1915, 1016, 98, 126 ], [ 305, 229, 134, 131 ], [ 805, 320, 149, 108 ], [ 1302, 324, 134, 110 ], [ 1744, 360, 170, 117 ], [ 248, 122, 243, 278 ], [ 761, 217, 227, 234 ], [ 1255, 266, 225, 205 ], [ 1714, 277, 232, 243 ], [ 71, 84, 602, 1394 ], [ 624, 143, 468, 1335 ], [ 1095, 162, 500, 1316 ], [ 1516, 205, 534, 1273 ] ],
"category": [ 2, 2, 2, 2, 2, 2, 2, 2, 4, 4, 4, 4, 1, 1, 1, 1, 0, 0, 0, 0 ]
}
```
The bounding box `[1505, 880, 78, 205]` is entirely outside the image. The way the code runs on the two machines I've tested, this is normalized by albumentations (which apparently uses the actual dimensions of the image rather than the dataset annotations) into a bounding box of [1., 1., 1., 1.], which then triggers an `Error: x_max is less than or equal to x_min for bbox [1. 1. 1. 1. 2.]`. I'm not sure why this happens despite passing `clip=True, min_area=25` as arguments to `albumentations.Compose()`, but given that the error clearly comes from an erroneous dataset, I don't care too much. In total, there are 12 bounding boxes that trigger similar errors for `image_id` 83 alone, and `image_id`s 97 and 702 also trigger similar errors.
I'm not sure why this breaks down now and hasn't been caught before. It might be something of the particular combo of OS and library versions I'm using. I'm happy to provide more details into those if needed, but the issue fundamentally comes from the dataset itself, and it seems unwise to have the tutorial silently rely on ignoring erroneous data. Similarly, I didn't include code snippets that will demonstrate this behavior, but I'm happy to do so on request.
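For anyone hitting the same crash, a workaround I tried is to drop boxes whose origin falls outside the true image before handing examples to Albumentations — this is my own ad-hoc helper, not part of the tutorial:
```python
def drop_out_of_bounds_boxes(example):
    """Keep only COCO-format [x, y, w, h] boxes whose origin lies inside the image."""
    width, height = example["image"].size  # actual pixel size, not the annotation
    objects = example["objects"]
    keep = [
        i
        for i, (x, y, w, h) in enumerate(objects["bbox"])
        if x < width and y < height and w > 0 and h > 0
    ]
    example["objects"] = {key: [values[i] for i in keep] for key, values in objects.items()}
    return example

# e.g. cppe5["train"] = cppe5["train"].map(drop_out_of_bounds_boxes)
```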
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the code as given on [[https://huggingface.co/docs/transformers/tasks/object_detection]] up to model training.
Alternatively, after the line `cppe5["train"] = cppe5["train"].with_transform(train_transform_batch)`, trying to access `cppe5["train"][XXX]` such that we access `image_id` 83, 97, or 702 will trigger similar errors.
### Expected behavior
Training completes without errors.
|
open
|
2025-03-03T15:22:09Z
|
2025-03-04T09:01:12Z
|
https://github.com/huggingface/transformers/issues/36516
|
[
"Examples",
"bug",
"Vision"
] |
LambdaP
| 4
|
ARM-DOE/pyart
|
data-visualization
| 1,406
|
Mention Cfradial Version in read_cfradial
|
Currently, the API documentation of `read_cfradial` does not mention any version number of the Cfradial standard.
Although [Cfradial version 2](https://github.com/NCAR/CfRadial) officially still has draft status, it might be worth to mention which cfradial version is expected by `read_cfradial`, especially since versions 1.x and 2.x are incompatible.
|
closed
|
2023-03-23T15:31:58Z
|
2023-03-29T20:30:12Z
|
https://github.com/ARM-DOE/pyart/issues/1406
|
[
"Docs"
] |
Ockenfuss
| 3
|
Gozargah/Marzban
|
api
| 907
|
arbitrary "path" for json subscription link
|
Version 6.40 of the v2rayN app finally added the ability to add multiple custom configs via a subscription link. The problem is that the JSON subscription link has to be built by appending the /v2ray-json suffix to the end of the subscription link we get from the panel. This increases the chance of the domain used for the subscription link being blocked, because the string "json" is in it and the censor can easily tell the domain is being used for a circumvention tool.
Please add the ability to use an **arbitrary suffix (path)** for retrieving the JSON subscription link.
|
closed
|
2024-03-30T22:43:52Z
|
2024-03-31T07:18:55Z
|
https://github.com/Gozargah/Marzban/issues/907
|
[
"Invalid"
] |
farshadl
| 3
|
DistrictDataLabs/yellowbrick
|
matplotlib
| 373
|
Identity Estimator in Contrib
|
Create an Identity estimator to allow yellowbrick to use datasets that already contain predicted probabilities
```
from sklearn.base import BaseEstimator, ClassifierMixin
import numpy as np

class Identity(BaseEstimator, ClassifierMixin):
    """Pass-through estimator: X already holds predicted scores/probabilities."""

    def fit(self, X, y=None):
        return self

    def predict(self, X):
        if X.ndim > 1:
            raise ValueError("pass through, provide y_true!")
        return X

    def predict_proba(self, X):
        if X.ndim > 1:
            raise ValueError("pass through, provide y_true!")
        Xr = X.values.reshape(-1, 1)
        Xinv = (1.0 - X).values.reshape(-1, 1)
        return np.concatenate([Xr, Xinv], axis=1)


# Usage on the COMPAS data (`d` is the COMPAS dataframe from the notebook linked below):
from yellowbrick.classifier import ClassificationReport

X2 = (d['decile_score'] > 5).astype(int)
y2 = d['two_year_recid']
viz2 = ClassificationReport(Identity(), classes=['Doesn\'t reoffend', 'Re-offends'])
viz2.fit(X2, y2)
viz2.score(X2, y2)
viz2.poof()
```
This concept was raised in a lab presentation by @ccjolley and taken from this notebook:
https://github.com/ccjolley/yb-COMPAS/blob/master/COMPAS%20v2.ipynb
|
open
|
2018-03-28T23:51:40Z
|
2018-04-04T19:06:37Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/373
|
[
"type: feature",
"type: contrib"
] |
ndanielsen
| 0
|
ranaroussi/yfinance
|
pandas
| 1,992
|
[Volume] FutureWarning: Setting an item of incompatible dtype is deprecated and will raise an error in a future version of pandas.
|
### Describe bug
The code still works, but it shows a warning. I used `yf.download()`; it happens more often if the interval is set to `1d`.
### Simple code that reproduces your problem
```python
from typing import List

import pandas as pd
import yfinance as yf


def fetch_stock_data(tickers: List[str]) -> pd.DataFrame:
    """Fetch data for a list of tickers using yfinance."""
    ticker_list = []
    for ticker in tickers:
        df = yf.download(
            ticker,
            interval="1wk",
            start="2020-01-01",
            repair=True,
            session=session,  # requests session defined elsewhere by the reporter
        )
        df["Ticker"] = ticker
        ticker_list.append(df)
    df = pd.concat(ticker_list)
    df.reset_index(inplace=True)
    return df
```
### Debug log
FutureWarning: Setting an item of incompatible dtype is deprecated and will raise an error in a future version of pandas. Value '[2.10625500e+07 1.67656485e+07 1.34876505e+07 1.91036985e+07
1.26580980e+07 2.04354495e+07 4.59354495e+07 2.04892005e+07
1.96474005e+07 1.06201005e+07 2.06118510e+07 2.38032990e+07
2.66502510e+07 3.82521000e+07 2.42962005e+07 1.75762005e+07
2.26345995e+07 2.45272995e+07 2.20944990e+07 2.21245515e+07
1.53963495e+07 1.66012500e+07 1.30546485e+07 2.24671500e+07
2.24748510e+07 1.81781520e+07 3.77269995e+07 2.00704995e+07
2.05397505e+07 1.92138000e+07 1.45547010e+07 1.88683500e+07
2.93336505e+07 2.48687505e+07 2.60209500e+07 1.76433510e+07
4.97940000e+07 2.67416490e+07 4.11957990e+07 6.77644005e+07
3.63142995e+07 2.85641010e+07 2.41101510e+07 3.59699490e+07
4.66023510e+07 1.34788602e+08 7.43315505e+07 2.90594490e+07
1.69724505e+07 1.22073990e+07 1.75332000e+07 2.39837490e+07
2.08520985e+07 1.79749500e+07 1.30892985e+07 2.40854505e+07
1.46409510e+07 1.74895500e+07 2.80684995e+07 2.67267495e+07
6.70891500e+07 1.44740490e+07 1.57225020e+07 3.29243985e+07
4.40091000e+07 7.04274990e+07 2.26439505e+07 2.02188000e+07
2.65490490e+07 1.70314500e+07 1.99385505e+07 1.75617495e+07
1.07474505e+07 9.58785150e+06 8.79180150e+06 8.72089950e+06
9.78925050e+06 1.28023500e+07 2.11819995e+07 1.25842020e+07
1.08182010e+07 1.12408005e+07 1.11389505e+07 1.49461005e+07
1.92435495e+07 1.72294500e+07 1.62970500e+07 1.30996005e+07
2.47017000e+07 2.27771010e+07 2.64075495e+07 3.45337980e+07
2.08153500e+07 9.64749900e+06 1.80844500e+07 1.62936510e+07
2.01943500e+07 2.01506985e+07 1.74450510e+07 2.05086990e+07
1.99185510e+07 1.18534995e+07 1.71190005e+07 2.08334505e+07
2.63137005e+07 1.67209995e+07 1.10944995e+07 1.80421005e+07
1.31673495e+07 5.68794900e+06 1.71547500e+07 1.70215995e+07
1.44353490e+07 1.15281495e+07 1.03399500e+07 9.25294950e+06
1.27330500e+07 2.78996505e+07 1.23022005e+07 1.18151490e+07
1.04793510e+07 1.53019005e+07 1.17685500e+07 1.38442500e+07
1.43783490e+07 9.61495050e+06 9.68740050e+06 1.27498005e+07
1.30861500e+07 1.37829510e+07 2.48670510e+07 1.52029500e+07
2.00702505e+07 3.11576490e+07 1.09909515e+07 1.12777995e+07
1.77999510e+07 1.03131495e+07 1.19172510e+07 8.99750100e+06
1.57977510e+07 1.74415500e+07 1.71862005e+07 2.22804510e+07
2.48519490e+07 1.28615985e+07 8.97985050e+06 5.81940000e+06
8.87909850e+06 1.41810990e+07 1.00131495e+07 1.29507990e+07
1.62798495e+07 1.61027505e+07 7.68529950e+06 9.64315200e+06
9.96745050e+06 6.41449950e+06 6.03065100e+06 9.69649950e+06
4.84925100e+06 5.90010150e+06 1.19467515e+07 8.69184900e+06
8.56419900e+06 9.26245050e+06 1.07934495e+07 1.07198520e+07
1.73134500e+07 1.00555500e+07 9.74299950e+06 1.03786515e+07
1.15933005e+07 1.06112505e+07 7.96140000e+06 1.03999995e+07
8.41255050e+06 9.77564850e+06 1.12130490e+07 1.01074995e+07
7.18449900e+06 6.75580050e+06 1.00755480e+07 6.85830000e+06
8.58105150e+06 1.08989505e+07 6.68255100e+06 9.43984950e+06
1.16806485e+07 1.44384015e+07 8.86635000e+06 8.25940050e+06
6.67435050e+06 1.02404985e+07 1.02786000e+07 9.32115150e+06
7.34284800e+06 9.19180050e+06 6.21079950e+06 1.30910490e+07
1.65008490e+07 1.57128510e+07 1.87312485e+07 1.93867980e+07
2.24612520e+07 1.28666985e+07 5.65314900e+06 9.18955050e+06
1.64545515e+07 1.14418500e+07 6.39709950e+06 8.57429850e+06
1.04579985e+07]' has dtype incompatible with int64, please explicitly cast to a compatible dtype first.
df2.loc[f_open_xor_closed_fixed, "Volume"] *= 0.5 * m_rcp
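A minimal caller-side sketch for silencing this specific warning until it is addressed upstream, assuming it is emitted from the repair code shown above (this only hides the warning; it does not change the dtype handling):
```python
import warnings

# Suppress only the pandas FutureWarning triggered by yfinance's price repair,
# leaving other FutureWarnings visible.
warnings.filterwarnings(
    "ignore",
    message="Setting an item of incompatible dtype is deprecated",
    category=FutureWarning,
)
```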
### Bad data proof
_No response_
### `yfinance` version
0.2.40
### Python version
3.12.2
### Operating system
Ubuntu noble 24.04 x86_64
|
closed
|
2024-07-19T07:16:43Z
|
2024-07-19T08:03:19Z
|
https://github.com/ranaroussi/yfinance/issues/1992
|
[] |
makamto
| 1
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
pytorch
| 1,508
|
Question on semantic loss of CycleGANSemanticModel
|
Hi, I got a bit confused about the implementation of the semantic loss in the CycleGANSemanticModel. In your CyCADA paper, the semantic consistency loss is computed using a pretrained model fs. However, in this code I found that the semantic loss is directly computed using the target model ft. I just wonder why the implementation differs and how this will influence the result.
|
closed
|
2022-11-16T10:28:56Z
|
2022-11-17T05:48:03Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1508
|
[] |
yklInverted
| 0
|
JaidedAI/EasyOCR
|
machine-learning
| 1,246
|
Rely on a common/proven/maintained models retrieval logic
|
Every new OCR solution seems to rely on its own set of model. EasyOCR, DocTR, OpenMMLab's MMOCR, ...
It's even worse when models are not retrieved from their upstream/official location, leading to all sorts of questions about performance, training, ... (dozens of issues in the project bug tracker about dbnet18, dbnet50, custom models, ...)
MMOCR seems to provide **many** models (and a clear list) https://mmocr.readthedocs.io/en/dev-1.x/modelzoo.html
Don't you think all the zip/download/config handling could be removed/unified so that model listing/choice/selection is abstracted, instead of being repeated with as many hardcoded lists as there are Python OCR projects?
The immediate benefit is that one can keep its usual codebase / library and switch/compare models with little to no changes involved.
|
open
|
2024-05-03T23:29:53Z
|
2024-05-03T23:29:53Z
|
https://github.com/JaidedAI/EasyOCR/issues/1246
|
[] |
drzraf
| 0
|
15r10nk/inline-snapshot
|
pytest
| 98
|
Broken "pytest integration" documentation page
|
All python code blocks are failing in https://15r10nk.github.io/inline-snapshot/pytest/
Error messages:
- `environment: line 10: pytest: command not found`
- `/usr/local/bin/python: No module named pytest`
|
closed
|
2024-07-13T20:59:34Z
|
2024-07-17T12:59:18Z
|
https://github.com/15r10nk/inline-snapshot/issues/98
|
[] |
tmlmt
| 2
|
AutoGPTQ/AutoGPTQ
|
nlp
| 715
|
[BUG] The paths for the custom_bwd and custom_fwd methods have changed
|
**Describe the bug**
The paths for the custom_bwd and custom_fwd methods have changed, and their arguments have been updated.
Before
```
from torch.cuda.amp import custom_bwd, custom_fwd
```
After (need to change)
```
from torch.amp import custom_bwd, custom_fwd
```
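A minimal sketch of how the decorators would be applied after the move, assuming the new `device_type` argument (the class below is illustrative, not AutoGPTQ's actual kernel code):
```python
import torch
from torch.amp import custom_bwd, custom_fwd


class ExampleFunction(torch.autograd.Function):
    @staticmethod
    @custom_fwd(device_type="cuda", cast_inputs=torch.float16)
    def forward(ctx, x):
        return x * 2

    @staticmethod
    @custom_bwd(device_type="cuda")
    def backward(ctx, grad_output):
        return grad_output * 2
```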
**Hardware details**
CPU: ???
GPU: RTX 4090
**Software version**
OS: Linux
Python: 3.10
auto-gptq: 0.7.1
pytorch: 2.4.0
transformers: 4.42.4
accelerate: 0.33.0
triton: 3.0.0
**To Reproduce**
I am currently using the following imported modules in my project, but an error occurs.
```
import os
import gc
import csv
import json
import copy
import time
import random
import argparse
import torch
import numpy as np
from LLMPruner.peft import PeftModel
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
LlamaForCausalLM,
LlamaTokenizer,
)
from awq import AutoAWQForCausalLM
from auto_gptq import (
BaseQuantizeConfig,
AutoGPTQForCausalLM
)
```
The following warnings occur, starting at line 411 in auto_gptq/nn_modules/triton_utils/kernels.py. The torch.cuda.amp.custom_fwd entry point is deprecated (and slated for removal), so it needs to be changed to torch.amp.custom_fwd.
```
packages/auto_gptq/nn_modules/triton_utils/kernels.py:411: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead.
def forward(ctx, input, qweight, scales, qzeros, g_idx, bits, maxq):
/usr/local/lib/python3.10/dist-packages/auto_gptq/nn_modules/triton_utils/kernels.py:419: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead.
def backward(ctx, grad_output):
/usr/local/lib/python3.10/dist-packages/auto_gptq/nn_modules/triton_utils/kernels.py:461: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead.
@custom_fwd(cast_inputs=torch.float16)
...
...
```
**Expected behavior**
The module import paths need to be changed, and the input arguments need to be modified to ensure proper functionality.
**Screenshots**


**Additional context**
I have currently submitted a Pull Request and would like to request a review. What further modifications are needed?
https://github.com/AutoGPTQ/AutoGPTQ/pull/714
|
open
|
2024-07-26T17:30:22Z
|
2024-07-28T14:22:23Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/715
|
[
"bug"
] |
russellgeum
| 0
|
google-research/bert
|
tensorflow
| 1,066
|
BERT-Tiny,BERT-Mini,BERT-Small,BERT-Medium - TF 2.0 checkpoints
|
Hi all,
I am looking at the BERT checkpoints here - https://github.com/tensorflow/models/tree/master/official/nlp/bert for TF 2.0.
Are checkpoints for BERT-Tiny, BERT-Mini, BERT-Small, and BERT-Medium available in TF 2.0?
|
closed
|
2020-04-20T17:42:37Z
|
2020-08-14T19:17:55Z
|
https://github.com/google-research/bert/issues/1066
|
[] |
17patelumang
| 2
|
Yorko/mlcourse.ai
|
numpy
| 743
|
AI learning
|
closed
|
2023-04-23T07:20:16Z
|
2023-05-03T09:07:35Z
|
https://github.com/Yorko/mlcourse.ai/issues/743
|
[
"invalid"
] |
manishikuma
| 0
|
|
alpacahq/alpaca-trade-api-python
|
rest-api
| 318
|
Erroneous historical bars - AAPL
|
I was using the data API to retrieve historical data (YTD) for AAPL, but the returned values look wrong.
Here is the code snippet:
```python
import pandas as pd
import alpaca_trade_api as tradeapi

api = tradeapi.REST()

bars = api.get_barset(
    symbols="AAPL",
    timeframe="day",
    start="2020-01-01T0:00:00-04:00"
)

# reformat the output into a pd.DataFrame
bars = [
    dict(
        bar._raw,
        symbol=symbol,
        t=bar.t,
    )
    for symbol, symbol_bars in bars.items()
    for bar in symbol_bars
]
df = pd.DataFrame.from_records(bars)
```
`df["c"].max()` gives `506.19`
When plotting the close prices, we have
```python
df.plot(x='t', y='c')
```

Am I missing something here? Any idea on what this can be due to?
Note that directly sending an HTTP request to the data endpoint `https://data.alpaca.markets/v1` returned the same values.
--
For reference, the YTD close prices for AAPL in yfinance:

|
closed
|
2020-10-25T07:51:55Z
|
2021-08-11T08:26:58Z
|
https://github.com/alpacahq/alpaca-trade-api-python/issues/318
|
[] |
syltruong
| 5
|
schemathesis/schemathesis
|
pytest
| 2,371
|
[FEATURE] Filling missing examples for basic data types
|
## Description of Problem
`--contrib-openapi-fill-missing-examples` not working as expected along with `--hypothesis-phases=explicit`
Let us say I have a field `name` (of type `string`), and I use `--hypothesis-phases=explicit` and `--contrib-openapi-fill-missing-examples`.
Schemathesis generates `name: ''`, which is not valid JSON.
I have also noticed that it only generates the empty string for the `name` field if it is in the `required` list; otherwise it skips the field altogether.
The issue occurs with both Swagger 2.0 and OpenAPI 3.0.
## Steps to Reproduce
1. Take the attached schema
[tree-openapi30.txt](https://github.com/user-attachments/files/16366400/tree-openapi30.txt)
2. Change the extension from txt to yaml (GitHub does not allow yaml uploads)
3. run `st run -v --hypothesis-phases=explicit --contrib-openapi-fill-missing-examples --validate-schema=true --hypothesis-verbosity=debug tree-openapi30.yaml --dry-run`
After testing, I notice the missing examples only being filled when a pattern is specified.
For the year, I see `0000` if the pattern given is `^\d{4}`.
For basic schema types like https://swagger.io/docs/specification/data-models/data-types/
I expect examples to be filled, for example:
`string`: abcd (or consider a random value from [a-zA-Z0-9]+)
`integer`: 1 (or consider a random value from the range 1-100)
`number`: 1.0 (or consider a random value with a single decimal)
`boolean`: a random choice between true and false
## Workaround
A workaround is to specify patterns everywhere.
If this behaviour was intended, it is missing from the documentation!
## Version
st, version 3.33.1
## Expectation
Generated values should be filled in for the default data types when explicit examples are missing, irrespective of the `required` list.
For example, for a field `name` of `type: string`, `name: 'qwrgte3q5w'` is generated irrespective of whether `name` is `required`.
|
open
|
2024-07-24T19:07:25Z
|
2024-07-28T09:08:55Z
|
https://github.com/schemathesis/schemathesis/issues/2371
|
[
"Type: Feature"
] |
ravy
| 1
|
widgetti/solara
|
flask
| 629
|
Pyodide & Solara Integration?
|
I want to add Pyodide to Solara so the clients can run Python code in the browser.
Having a look over Solara code base i noticed this line:
`render_kwargs={"for_pyodide": True}`
What does it do? I found it [here](https://github.com/widgetti/solara/blob/cbf51712885642102b9dd6f3e48d1f60069f0413/solara/__main__.py#L620) and was curious to know if there is a simpler way than adding it manually, which brings me onto my next question.
I can see in Solara's documentation that I can have a head tag, but I can't use a script tag; is this correct? If so, how would I add Pyodide from a CDN?
I have set up the Assets & Public directories, but I'm not sure how to use the <script> tag to load Pyodide.
|
open
|
2024-05-02T19:25:52Z
|
2024-05-08T15:11:18Z
|
https://github.com/widgetti/solara/issues/629
|
[] |
SuperPauly
| 1
|
seleniumbase/SeleniumBase
|
web-scraping
| 2,432
|
can use chrome profile
|
Can I use a Chrome profile path?
|
closed
|
2024-01-15T12:08:43Z
|
2024-01-15T16:17:52Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2432
|
[
"duplicate",
"question"
] |
raymomo
| 1
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,230
|
Can we train cycle GAN using paired dataset (Aligned dataset)
|
Sir, if we have an aligned dataset, can CycleGAN also show supervised learning behaviour?
I mean, can we add an L1 loss between the ground-truth image (domain B) and the generated image (domain B)?
In simple words, is it possible to include a pix2pix-style L1 loss on an aligned dataset in CycleGAN for better results?
And is there any additional benefit of using an aligned dataset with CycleGAN while training?
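A minimal sketch of the kind of extra term being asked about, added on top of the usual CycleGAN objective when A/B pairs are aligned (the function and argument names are placeholders, not the repository's actual attributes):
```python
import torch
import torch.nn.functional as F

def supervised_generator_loss(loss_g_cyclegan: torch.Tensor,
                              fake_b: torch.Tensor,
                              real_b: torch.Tensor,
                              lambda_l1: float = 100.0) -> torch.Tensor:
    """The usual CycleGAN generator loss plus a pix2pix-style L1 term on paired data."""
    return loss_g_cyclegan + lambda_l1 * F.l1_loss(fake_b, real_b)
```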
|
open
|
2021-01-22T18:26:27Z
|
2021-02-03T08:04:56Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1230
|
[] |
shivom9713
| 2
|
mlfoundations/open_clip
|
computer-vision
| 435
|
Train Model With Multiple Input Images
|
Is it possible to change the model to accept more than one image as the input?
If I'm not mistaken, CLIP takes an image and a text as the inputs, extracts the features of these two inputs and finally gives us the logits of the distance of the image to the text.
So, is it possible to give two (or more) input images and extract ONE feature from the input images (just like before)?
I want to somehow mix the two inputs. For example, inputting an image alongside its semantic segmentation as the input to the model. If it's possible, what parts of the code should I change? Or is this already implemented and usable?
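A purely illustrative sketch of one way to 'mix' an image with its segmentation map before a stock visual tower, by concatenating channels and projecting back to 3 channels (this is a generic PyTorch module, not open_clip's API):
```python
import torch
import torch.nn as nn

class MixImageAndSegmentation(nn.Module):
    """Fuse an RGB image with a 1-channel segmentation map into a 3-channel tensor."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Conv2d(4, 3, kernel_size=1)  # learnable 1x1 projection

    def forward(self, image: torch.Tensor, segmentation: torch.Tensor) -> torch.Tensor:
        x = torch.cat([image, segmentation], dim=1)  # (B, 4, H, W)
        return self.proj(x)                          # (B, 3, H, W), ready for the visual tower
```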
Thanks.
|
open
|
2023-02-18T10:42:05Z
|
2023-02-18T22:01:06Z
|
https://github.com/mlfoundations/open_clip/issues/435
|
[] |
Neltherion
| 3
|
httpie/cli
|
rest-api
| 653
|
How to get native terminal colors?
|
This used to work and now it's using its own set of really ugly colors (I've tried the different themes). I was under the impression `native` was supposed to use native terminal colors.
Thanks!
|
closed
|
2018-02-17T14:26:42Z
|
2018-08-24T18:52:50Z
|
https://github.com/httpie/cli/issues/653
|
[] |
9mm
| 16
|
biolab/orange3
|
numpy
| 7,050
|
Widget documentation
|
In https://github.com/biolab/orange3-doc-visual-programming/pull/2, @ajdapretnar updated screenshots and some text, following similar contributions by @borondics. This was/is needed and long overdue, but documentation has other issues as well.
1. It looks awful, in particular because of image scaling: some retina images appear twice the size (or perhaps they are resized to the full width of the column), and some are too wide and squeezed into the column. As a consequence, a page for a single widget is a hodgepodge of screenshots in different resolutions, most of them too big or too small.
I think we should use a style in which the widths of images do not depend on the widths of columns, except for very wide images that would be shown downscaled and expanded on click. We should generally avoid those.
2. I dislike the general layout (see below). It's just ... ugly, and in part the cause of the problems with images. We can host the docs on github pages and use whatever style we wish.
3. Terminology: we should look for continuous and discrete, and replace it with numeric and categorical. The term *domain* is also meaningless outside the 80s and 90s ML. *Scheme* is now a *workflow*.
4. Style: remove all occurrences of "you", "your" etc., like in *Select the variables you wish to see plotted. Optimize your projection with *, which should be "Select the variables to plot. Optimize the projection with ...". (This is mostly my fault, but I was half my age then...).

|
open
|
2025-03-15T10:52:31Z
|
2025-03-15T12:38:37Z
|
https://github.com/biolab/orange3/issues/7050
|
[
"needs discussion"
] |
janezd
| 1
|
modin-project/modin
|
pandas
| 7,171
|
BUG: ValueError: Length of values (3) does not match length of index (3001)
|
### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-master-branch).)
### Reproducible Example
```python
import modin.pandas as pd
data_dict = {
"country": ["Delhi", "Mumbai", "Kolkata"] * 1000,
"annual tax collected": [19294482072, 28916155672, 24112550372] * 1000,
"happiness_index": [9.94, 7.16, 6.35] * 1000,
}
df = pd.DataFrame(data_dict)
df.loc[len(df)] = ['Noida', None, None]
# ValueError: Length of values (3) does not match length of index (3001)
```
### Issue Description
Failed to insert a new row.
### Expected Behavior
Match pandas.
### Error Logs
<details>
```python-traceback
Replace this line with the error backtrace (if applicable).
```
</details>
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
|
closed
|
2024-04-11T08:28:50Z
|
2024-04-11T08:35:47Z
|
https://github.com/modin-project/modin/issues/7171
|
[
"bug 🦗",
"pandas concordance 🐼"
] |
YarShev
| 1
|
RomelTorres/alpha_vantage
|
pandas
| 253
|
intraday data returned does not include current day
|
The `get_intraday` function returns data for the past 7 days, not including the current day in which the code is executed. Could we add functionality to retrieve the same data for the ongoing day up to the moment the function is called or are there other functions that already complete this action?
|
closed
|
2020-08-28T17:18:56Z
|
2020-08-28T17:23:35Z
|
https://github.com/RomelTorres/alpha_vantage/issues/253
|
[] |
sher85
| 1
|
sktime/pytorch-forecasting
|
pandas
| 1,385
|
[BUG] Unable to install with pip
|
- PyTorch-Forecasting version:
- PyTorch version: 2.0.1
- Python version: 3.11
- Operating System: Windows 11
### Expected behavior
I executed `pip install pytorch-forecasting` to install the library
### Actual behavior
However, the result was:
```
Collecting pytorch-forecasting
Using cached pytorch_forecasting-0.10.1-py3-none-any.whl (127 kB)
Requirement already satisfied: matplotlib in c:\users\arasyidi\appdata\local\miniconda3\envs\forecast_test\lib\site-packages (from pytorch-forecasting) (3.8.0)
Collecting optuna<3.0.0,>=2.3.0 (from pytorch-forecasting)
Using cached optuna-2.10.1-py3-none-any.whl (308 kB)
Collecting pandas<2.0.0,>=1.3.0 (from pytorch-forecasting)
Using cached pandas-1.5.3-cp311-cp311-win_amd64.whl (10.3 MB)
Collecting pytorch-lightning<2.0.0,>=1.2.4 (from pytorch-forecasting)
Using cached pytorch_lightning-1.9.5-py3-none-any.whl (829 kB)
Collecting scikit-learn<1.1,>=0.24 (from pytorch-forecasting)
Using cached scikit-learn-1.0.2.tar.gz (6.7 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [66 lines of output]
Partial import of sklearn during the build process.
setup.py:128: DeprecationWarning:
`numpy.distutils` is deprecated since NumPy 1.23.0, as a result
of the deprecation of `distutils` itself. It will be removed for
Python >= 3.12. For older Python versions it will remain present.
It is recommended to use `setuptools < 60.0` for those Python versions.
For more details, see:
https://numpy.org/devdocs/reference/distutils_status_migration.html
from numpy.distutils.command.build_ext import build_ext # noqa
INFO: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
Traceback (most recent call last):
File "C:\Users\arasyidi\AppData\Local\miniconda3\envs\forecast_test\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\arasyidi\AppData\Local\miniconda3\envs\forecast_test\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arasyidi\AppData\Local\miniconda3\envs\forecast_test\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 149, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arasyidi\AppData\Local\Temp\pip-build-env-645yivs5\overlay\Lib\site-packages\setuptools\build_meta.py", line 174, in prepare_metadata_for_build_wheel
self.run_setup()
File "C:\Users\arasyidi\AppData\Local\Temp\pip-build-env-645yivs5\overlay\Lib\site-packages\setuptools\build_meta.py", line 268, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arasyidi\AppData\Local\Temp\pip-build-env-645yivs5\overlay\Lib\site-packages\setuptools\build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 319, in <module>
setup_package()
File "setup.py", line 315, in setup_package
setup(**metadata)
File "C:\Users\arasyidi\AppData\Local\Temp\pip-build-env-645yivs5\overlay\Lib\site-packages\numpy\distutils\core.py", line 135, in setup
config = configuration()
^^^^^^^^^^^^^^^
File "setup.py", line 201, in configuration
config.add_subpackage("sklearn")
File "C:\Users\arasyidi\AppData\Local\Temp\pip-build-env-645yivs5\overlay\Lib\site-packages\numpy\distutils\misc_util.py", line 1050, in add_subpackage
config_list = self.get_subpackage(subpackage_name, subpackage_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arasyidi\AppData\Local\Temp\pip-build-env-645yivs5\overlay\Lib\site-packages\numpy\distutils\misc_util.py", line 1016, in get_subpackage
config = self._get_configuration_from_setup_py(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arasyidi\AppData\Local\Temp\pip-build-env-645yivs5\overlay\Lib\site-packages\numpy\distutils\misc_util.py", line 958, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arasyidi\AppData\Local\Temp\pip-install-n8lq38yw\scikit-learn_fad215f4bef24ed397d9efb7423afc45\sklearn\setup.py", line 85, in configuration
cythonize_extensions(top_path, config)
File "C:\Users\arasyidi\AppData\Local\Temp\pip-install-n8lq38yw\scikit-learn_fad215f4bef24ed397d9efb7423afc45\sklearn\_build_utils\__init__.py", line 47, in cythonize_extensions
basic_check_build()
File "C:\Users\arasyidi\AppData\Local\Temp\pip-install-n8lq38yw\scikit-learn_fad215f4bef24ed397d9efb7423afc45\sklearn\_build_utils\pre_build_helpers.py", line 114, in basic_check_build
compile_test_program(code)
File "C:\Users\arasyidi\AppData\Local\Temp\pip-install-n8lq38yw\scikit-learn_fad215f4bef24ed397d9efb7423afc45\sklearn\_build_utils\pre_build_helpers.py", line 70, in compile_test_program
ccompiler.compile(
File "C:\Users\arasyidi\AppData\Local\Temp\pip-build-env-645yivs5\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 327, in compile
self.initialize()
File "C:\Users\arasyidi\AppData\Local\Temp\pip-build-env-645yivs5\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 224, in initialize
vc_env = _get_vc_env(plat_spec)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arasyidi\AppData\Local\Temp\pip-build-env-645yivs5\overlay\Lib\site-packages\setuptools\msvc.py", line 316, in msvc14_get_vc_env
return _msvc14_get_vc_env(plat_spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arasyidi\AppData\Local\Temp\pip-build-env-645yivs5\overlay\Lib\site-packages\setuptools\msvc.py", line 270, in _msvc14_get_vc_env
raise distutils.errors.DistutilsPlatformError(
distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
|
closed
|
2023-09-25T07:44:47Z
|
2024-08-30T15:19:10Z
|
https://github.com/sktime/pytorch-forecasting/issues/1385
|
[
"bug"
] |
amrirasyidi
| 5
|
viewflow/viewflow
|
django
| 280
|
Demo site https
|
http://viewflow.io
http://demo.viewflow.io
These sites don't have https in 2020. I had some issues showing your solution to a customer.
|
closed
|
2020-06-19T07:04:42Z
|
2023-02-09T10:17:25Z
|
https://github.com/viewflow/viewflow/issues/280
|
[
"request/enhancement",
"dev/demo"
] |
slavnycoder
| 2
|