| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pydantic/pydantic
|
pydantic
| 11,590
|
Mypy error when using `@model_validator(mode="after")` and `@final`
|
### Initial Checks
- [x] I confirm that I'm using Pydantic V2
### Description
This is very similar to https://github.com/pydantic/pydantic/issues/6709, but occurs when the model class is decorated with `@typing.final`.
```console
$ mypy --cache-dir=/dev/null --strict bug.py
bug.py:10: error: Cannot infer function type argument [misc]
Found 1 error in 1 file (checked 1 source file)
```
This error seems to occur irrespective of whether the `pydantic.mypy` plugin is configured or not. I wondered if this might be an issue in `mypy`, but if I switch out `@pydantic.model_validator` for another decorator, e.g. `@typing_extensions.deprecated`, then the error isn't raised.
### Example Code
```Python
from __future__ import annotations
from typing import Self, final
import pydantic
@final
class MyModel(pydantic.BaseModel):
@pydantic.model_validator(mode="after")
def my_validator(self) -> Self:
return self
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.6
pydantic-core version: 2.27.2
pydantic-core build: profile=release pgo=false
install path: /home/nick/Sources/kraken-lexicon-gbr/.venv/lib/python3.12/site-packages/pydantic
python version: 3.12.9 (main, Feb 5 2025, 19:10:45) [Clang 19.1.6 ]
platform: Linux-6.13.4-arch1-1-x86_64-with-glibc2.41
related packages: typing_extensions-4.12.2 mypy-1.15.0
commit: unknown
```
|
closed
|
2025-03-20T11:55:22Z
|
2025-03-21T09:32:48Z
|
https://github.com/pydantic/pydantic/issues/11590
|
[
"bug V2",
"pending"
] |
ngnpope
| 1
|
JaidedAI/EasyOCR
|
deep-learning
| 957
|
Chinese image paths are not supported
|
EasyOCR fails when `readtext` is given an image with a Chinese path, because it uses python-opencv and `cv2.imread` cannot handle such paths.
So I added a small piece of code to utils.py to solve this problem. Now the OCR system runs smoothly.
```python
def is_chinese(string):
    """
    Check whether the string contains any Chinese characters.
    :param string: string to check
    :return: bool
    Reference: https://blog.csdn.net/wenqiwenqi123/article/details/122258804
    """
    for ch in string:
        if u'\u4e00' <= ch <= u'\u9fff':
            return True
    return False

def reformat_input(image):
    if type(image) == str:
        if image.startswith('http://') or image.startswith('https://'):
            tmp, _ = urlretrieve(image, reporthook=printProgressBar(prefix='Progress:', suffix='Complete', length=50))
            img_cv_grey = cv2.imread(tmp, cv2.IMREAD_GRAYSCALE)
            os.remove(tmp)
        else:
            if is_chinese(image):
                # cv2.imread cannot read non-ASCII paths, so read the bytes and decode in memory
                img_cv_grey = cv2.imdecode(np.fromfile(image, dtype=np.uint8), cv2.IMREAD_GRAYSCALE)
            else:
                img_cv_grey = cv2.imread(image, cv2.IMREAD_GRAYSCALE)
        image = os.path.expanduser(image)
        img = loadImage(image)  # can accept URL
        # ........
```
|
open
|
2023-03-01T02:31:50Z
|
2023-03-01T02:31:50Z
|
https://github.com/JaidedAI/EasyOCR/issues/957
|
[] |
drrobincroft
| 0
|
onnx/onnx
|
pytorch
| 5,988
|
Version converter: No Adapter From Version 16 for Identity
|
# Ask a Question
### Question
I ran into the following issue while trying to convert an ONNX model from opset 16 to opset 15:
"**adapter_lookup: Assertion `false` failed: No Adapter From Version $16 for Identity**"
If I have to use opset 15 and currently only have the opset 16 version of the relevant ONNX resources available, do you have any suggestions?

|
open
|
2024-03-02T13:51:14Z
|
2024-03-06T19:14:23Z
|
https://github.com/onnx/onnx/issues/5988
|
[
"question"
] |
lsp2
| 4
|
joke2k/django-environ
|
django
| 410
|
Default for keys in dictionary
|
With dict we can set defaults
```
MYVAR = env.dict(
"MYVAR",
{
'value': bool,
'cast': {
'ACTIVE': bool,
'URL': str,
}
},
default={
'ACTIVE': False,
'URL': "http://example.com",
})
```
Now, the defaults are used only if the entry `MYVAR` is not present in the `.env` file at all.
If I have an entry like `MYVAR=ACTIVE:True;`, the result is the dictionary `{"ACTIVE": "True"}`.
Wouldn't it be better to apply the defaults to all the keys that are not specified in the env? In my example I would then get
`{"ACTIVE": "True", "URL": "http://example.com"}`
|
open
|
2022-07-21T08:59:32Z
|
2024-04-30T17:12:38Z
|
https://github.com/joke2k/django-environ/issues/410
|
[] |
esseti
| 1
|
pennersr/django-allauth
|
django
| 3,914
|
Adding/changing email address with MFA enabled
|
It seems that it's impossible to add or change an email address while having MFA enabled, judging by this commit:
https://github.com/pennersr/django-allauth/pull/3383/commits/7bf4d5e6e4b188b7e0738652f2606b98804374ab
What is the logic behind this? What is the expected flow when a user needs to change an email address?
|
closed
|
2024-06-22T18:05:12Z
|
2024-06-22T19:17:58Z
|
https://github.com/pennersr/django-allauth/issues/3914
|
[] |
eyalch
| 2
|
lukasmasuch/streamlit-pydantic
|
streamlit
| 74
|
List of datatypes or Nested Models Not Displaying Correctly in `pydantic_input` form
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/lukasmasuch/streamlit-pydantic/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
### **Description:**
When using `pydantic_input` with lists (e.g., `List[str]`) or lists of models, the form does not render correctly.
This issue occurs despite following the correct usage patterns from the [official examples](https://st-pydantic.streamlit.app/), specifically the **Complex Instance Model** example. This behavior is observed for:
- Simple lists like `List[str]`, `List[int]`
- List of Pydantic models
```python
# Nested Pydantic Model
class NestedModel(BaseModel):
id: int = Field(..., description="ID of the nested object")
name: str = Field(..., description="Name of the nested object")
# Main ConfigModel
class ConfigModel(BaseModel):
keys: List[str] = Field(..., description="List of keys used for lookup")
value: str = Field(..., description="Value associated with the keys")
nested_items: List[NestedModel] = Field(..., description="List of nested model instances")
```
The above models give the following form when used with the `pydantic_input` method:

**Expected Behaviour**

### Reproducible Code Example
```Python
import streamlit as st
from pydantic import BaseModel, Field
from typing import List, Dict, Any
import streamlit_pydantic as sp
# Nested Pydantic Model
class NestedModel(BaseModel):
id: int = Field(..., description="ID of the nested object")
name: str = Field(..., description="Name of the nested object")
# Main ConfigModel
class ConfigModel(BaseModel):
keys: List[str] = Field(..., description="List of keys used for lookup")
value: str = Field(..., description="Value associated with the keys")
nested_items: List[NestedModel] = Field(..., description="List of nested model instances")
# -------------------------
# Streamlit UI
# -------------------------
st.set_page_config(layout="wide")
st.title("ConfigModel Input Form")
# Pre-filled data (Optional)
data = {
"keys": ["id1", "id2"],
"value": "example_value",
"metadata": [{"key": "meta1", "value": 100}, {"key": "meta2", "value": 200}],
"nested_items": [{"id": 1, "name": "Item 1"}, {"id": 2, "name": "Item 2"}]
}
model_instance = ConfigModel(**data)
# Render the form using streamlit_pydantic
with st.form("config_model_form"):
form_result = sp.pydantic_input(key="config_model", model=model_instance)
submit_button = st.form_submit_button("Submit")
# Handle form submission
if submit_button:
if form_result:
try:
validated_data = ConfigModel.model_validate(form_result).dict()
st.success("Form submitted successfully!")
st.json(validated_data, expanded=True)
except Exception as e:
st.error(f"Validation error: {str(e)}")
else:
st.warning("Form submission failed. Please check the inputs.")
```
### Steps To Reproduce
1. Install dependencies from the provided `requirements.txt`.
2. Save the code as `app.py`.
3. Run the code using:
```bash
streamlit run app.py
```
**Dependencies:**
```plaintext
pydantic-core==2.27.2
pydantic==2.10.6
streamlit==1.42.1
streamlit-pydantic==0.6.1-rc.2
pydantic-extra-types==2.10.2
pydantic-settings==2.7.1
```
### Expected Behavior
- The `keys` field should display as a list input with pre-filled values `"id1"`, `"id2"`.
- The `nested_items` field should render forms for each instance of the `NestedModel` within a list form.
### Current Behavior
- The list input does not appear or behaves incorrectly.
- Lists of models within `nested_items` are not displayed as expected, and editing or adding new instances is not possible.

### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- **OS:** Ubuntu 22.04
- **Python:** 3.9, 3.11
- **Streamlit:** 1.42.1
- **Pydantic:** 2.10.6
- **streamlit-pydantic:** 0.6.1-rc.2, 0.6.1-rc.3
### Additional Information
**Please advise on whether this is a compatibility issue or requires additional configuration.** 😊
|
open
|
2025-02-21T20:00:22Z
|
2025-02-25T13:33:12Z
|
https://github.com/lukasmasuch/streamlit-pydantic/issues/74
|
[
"type:bug",
"status:needs-triage"
] |
gaurav-brandscapes
| 0
|
influxdata/influxdb-client-python
|
jupyter
| 31
|
Add support for /delete metrics endpoint
|
closed
|
2019-11-01T15:09:05Z
|
2019-11-04T09:32:01Z
|
https://github.com/influxdata/influxdb-client-python/issues/31
|
[] |
rhajek
| 0
|
|
pydantic/FastUI
|
pydantic
| 253
|
Cities demo: clicking page "1" gives no response.
|
Is this a component bug?
|
closed
|
2024-03-21T13:13:59Z
|
2024-03-21T13:18:26Z
|
https://github.com/pydantic/FastUI/issues/253
|
[] |
qq727127158
| 1
|
pallets-eco/flask-sqlalchemy
|
flask
| 486
|
Connecting multiple pre-exising databases via binds
|
Hi. I'm new to flask-sqlalchemy. I have multiple pre-existing MySQL databases (for this example, two is enough). I have created a minimal example of what I'm trying to do below. I'm able to successfully connect to one database using ```SQLALCHEMY_DATABASE_URI```. For the second database, I'm trying to use ```SQLALCHEMY_BINDS```, but I am unable to access it. I've been unable to find an example that uses binding with pre-existing databases. Any help would be appreciated.
```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

# db1 - works ok.
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://user:password@localhost/SiPMCalibration'
# db2 - not working.
app.config['SQLALCHEMY_BINDS'] = {
    'db2': 'mysql://user:password@localhost/Run8Chan'
}

db = SQLAlchemy(app)
db.Model.metadata.reflect(db.engine)

# Setup the model for the pre-existing table.
class sipm_calibration(db.Model):
    __table__ = db.Model.metadata.tables['Calibration']

# Some code to retrieve the most recent 100 entries and print their headers (i.e. keys)
query = db.session.query(sipm_calibration).order_by(sipm_calibration.ID.desc()).limit(100)
print db.session.execute(query).keys()  # Works fine.

#---------------------
# All ok so far - try again for the second database via binding.
class run_entry(db.Model):
    __bind_key__ = 'db2'
    __table__ = db.Model.metadata.tables['Runs']

query = db.session.query(run_entry).order_by(run_entry.ID.desc()).limit(100)
print db.session.execute(query).keys()
```
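For what it's worth, a sketch of one thing I'm considering, under the assumption that the problem is simply that the second bind's tables were never reflected (the `db.get_engine(app, bind=...)` call is the Flask-SQLAlchemy 2.x API; adjust for your version):
```python
# Reflect the tables of the 'db2' bind explicitly so that
# db.Model.metadata also contains the 'Runs' table.
db.Model.metadata.reflect(bind=db.get_engine(app, bind='db2'))

class run_entry(db.Model):
    __bind_key__ = 'db2'
    __table__ = db.Model.metadata.tables['Runs']
```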
|
closed
|
2017-03-20T14:19:26Z
|
2020-12-05T20:46:27Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/486
|
[] |
daniel-saunders
| 3
|
plotly/dash-table
|
plotly
| 448
|
filtering not working in python or R
|
@rpkyle @Marc-Andre-Rivet Data Table filtering does not work when running the first example of the [docs](https://dash.plot.ly/datatable/interactivity) locally:

The gif above is from running:
```python
import dash
from dash.dependencies import Input, Output
import dash_table
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd

df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/gapminder2007.csv')

app = dash.Dash(__name__)

app.layout = html.Div([
    dash_table.DataTable(
        id='datatable-interactivity',
        columns=[
            {"name": i, "id": i, "deletable": True} for i in df.columns
        ],
        data=df.to_dict('records'),
        editable=True,
        filtering=True,
        sorting=True,
        sorting_type="multi",
        row_selectable="multi",
        row_deletable=True,
        selected_rows=[],
        pagination_mode="fe",
        pagination_settings={
            "current_page": 0,
            "page_size": 10,
        },
    ),
    html.Div(id='datatable-interactivity-container')
])

@app.callback(
    Output('datatable-interactivity-container', "children"),
    [Input('datatable-interactivity', "derived_virtual_data"),
     Input('datatable-interactivity', "derived_virtual_selected_rows")])
def update_graphs(rows, derived_virtual_selected_rows):
    if derived_virtual_selected_rows is None:
        derived_virtual_selected_rows = []

    dff = df if rows is None else pd.DataFrame(rows)

    colors = ['#7FDBFF' if i in derived_virtual_selected_rows else '#0074D9'
              for i in range(len(dff))]

    return [
        dcc.Graph(
            id=column,
            figure={
                "data": [
                    {
                        "x": dff["country"],
                        "y": dff[column],
                        "type": "bar",
                        "marker": {"color": colors},
                    }
                ],
                "layout": {
                    "xaxis": {"automargin": True},
                    "yaxis": {
                        "automargin": True,
                        "title": {"text": column}
                    },
                    "height": 250,
                    "margin": {"t": 10, "l": 10, "r": 10},
                },
            },
        )
        for column in ["pop", "lifeExp", "gdpPercap"] if column in dff
    ]

if __name__ == '__main__':
    app.run_server(debug=True)
```
The same behaviour occurs when running a simple datatable example in either python or R:
`Python` minimal example:
```python
import dash
import dash_table
import pandas as pd

df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/solar.csv')

app = dash.Dash(__name__)

def generateDataTable(DT, type):
    return(
        dash_table.DataTable(
            id = "cost-stats table" if type == "cost" else "procedure-stats-table",
            columns=[{"name": i, "id": i} for i in DT.columns],
            data = DT.to_dict("records"),
            filtering = True,
            sorting = True if type == "cost" else False,
            sorting_type = "multi",
            pagination_mode = "fe",
            pagination_settings = {
                "displayed pages": 1,
                "current_page": 0,
                "page_size": 5
            },
            navigation = "page",
            style_cell = {
                "background-color": "#171b26",
                "color": "#7b7d8d",
                "textOverflow": "ellipsis"
            },
            style_filter = {
                "background-color": "#171b26",
                "color": "7b7d8d"
            }
        )
    )

app.layout = generateDataTable(df, "cost")

if __name__ == '__main__':
    app.run_server(debug=True)
```
`R` minimal example:
```r
library(dashR)
library(dashCoreComponents)
library(dashHtmlComponents)
library(dashTable)

df <- read.csv(
  url(
    'https://raw.githubusercontent.com/plotly/datasets/master/solar.csv'
  ),
  check.names=FALSE,
  stringsAsFactors=FALSE
)

generateDataTable <- function(DT, type = c("procedure", "cost")){
  dashDataTable(
    id = ifelse(
      type == "cost",
      "cost-stats-table",
      "procedure-stats-table"
    ),
    columns = lapply(
      colnames(DT),
      function(x){
        list(name = x, id = x)
      }
    ),
    data = dashTable:::df_to_list(DT),
    filtering = TRUE,
    sorting = ifelse(
      type == "cost",
      FALSE,
      TRUE
    ),
    sorting_type = "multi",
    pagination_mode = "fe",
    pagination_settings = list(
      displayed_pages = 1, current_page = 0, page_size = 5
    ),
    navigation = "page",
    style_cell = list(
      backgroundColor = "#171b26",
      color = "#7b7d8d",
      textOverflow = "ellipsis"
    ),
    style_filter = list(
      backgroundColor = "#171b26",
      color = "#7b7d8d"
    )
  )
}

app <- Dash$new()

app$layout(generateDataTable(df))

app$run_server(debug = TRUE)
```
Package versions:
```
Python 3.7.2
dash==0.41.0
dash-core-components==0.46.0
dash-html-components==0.15.0
dash-renderer==0.22.0
dash-table==3.6.0
```
This occurs in Firefox 67.0 and Chrome 74.0.3729.169.
|
closed
|
2019-05-29T14:18:57Z
|
2019-06-04T18:07:21Z
|
https://github.com/plotly/dash-table/issues/448
|
[] |
sacul-git
| 2
|
dpgaspar/Flask-AppBuilder
|
rest-api
| 1,574
|
Invalid link in "about" of GitHub repo
|
Currently the "about" section has an invalid link to http://flaskappbuilder.pythonanywhere.com/:

Let's fix this and link to the docs: https://flask-appbuilder.readthedocs.io/en/latest/
@dpgaspar
|
closed
|
2021-02-25T22:55:30Z
|
2021-04-09T13:31:32Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/1574
|
[] |
thesuperzapper
| 3
|
dpgaspar/Flask-AppBuilder
|
rest-api
| 1,559
|
export only selected data using the @action method.
|
Hello, I'm desperately trying to export only selected data using the @action method.
The code actually works quite well, but only for one query.
`@action("export", "Export", "Select Export?", "fa-file-excel-o", single=False)
def export(self, items):
if isinstance(items, list):
urltools.get_filter_args(self._filters)
order_column, order_direction = self.base_order
count, lst = self.datamodel.query(self._filters, order_column, order_direction)`
`csv = ""`
`for item in self.datamodel.get_values(items, self.list_columns):
csv += str(item) + '\n'
response = make_response(csv)
cd = 'attachment; filename=mycsv.csv'
response.headers['Content-Disposition'] = cd
response.mimetype='text/csv'
return response`
I don't know what to do next. I would like to export all of the selected data

|
closed
|
2021-02-07T17:47:31Z
|
2021-02-12T08:25:50Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/1559
|
[] |
pUC19
| 0
|
proplot-dev/proplot
|
matplotlib
| 20
|
eps figures save massive object outside of figure
|
Not really sure how to debug this, but something in your figure saving creates vector objects that extend seemingly to infinity off the screen.
You can see it here in my Illustrator screenshot. The bar object can't really be modified by Illustrator because it says transforming it will make it too large. The blue outline shown extends downward infinitely. If I save an EPS from `matplotlib`, the object is only the size of the bar.
<img width="1366" alt="Screen Shot 2019-06-12 at 11 25 21 AM" src="https://user-images.githubusercontent.com/8881170/59372567-0b924900-8d05-11e9-97c6-d32a6b434adc.png">
|
closed
|
2019-06-12T17:28:14Z
|
2020-05-19T16:52:26Z
|
https://github.com/proplot-dev/proplot/issues/20
|
[
"bug"
] |
bradyrx
| 4
|
pyg-team/pytorch_geometric
|
pytorch
| 9,769
|
Slack link no longer works
|
### 📚 Describe the documentation issue
Slack link seems to be no longer active @rus
### Suggest a potential alternative/fix
_No response_
|
closed
|
2024-11-09T01:47:55Z
|
2024-11-13T03:03:56Z
|
https://github.com/pyg-team/pytorch_geometric/issues/9769
|
[
"documentation"
] |
chiggly007
| 1
|
graphistry/pygraphistry
|
jupyter
| 363
|
[FEA] API personal key support
|
Need support for new api service keys:
- [x] `register(personal_key=...)`, used for files + datasets endpoints
- [x] add to `README.md`
+ testing uploads
|
open
|
2022-06-10T23:24:15Z
|
2023-02-10T09:12:05Z
|
https://github.com/graphistry/pygraphistry/issues/363
|
[
"enhancement",
"p2"
] |
lmeyerov
| 1
|
statsmodels/statsmodels
|
data-science
| 8,947
|
How to completely remove training data from model.
|
Dear developers of statsmodels,
First of all, thank you so much for creating such an amazing package.
I've recently used statsmodels (0.14.0) to make a GLM model and I would like to share it with others so that they can input their own data and make predictions.
However, I am prohibited from sharing the data I have used for training.
Therefore, I tried to remove the training data by calling `.save()` with `remove_data=True`, but when I checked the `.model.data.frame` attribute of the fitted model object, it still contained the training data.
To the best of my knowledge the data is also stored in `model.data.orig_exog` and `model.data.orig_endog` as mentioned in [issue7494](https://github.com/statsmodels/statsmodels/issues/7494), but I am not sure where else the training data may be held.
If you could please enlighten me on how to completely remove the training data from the fitted model object, that would be great.
Many thanks in advance.
### Reproducible code
```
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
# load "growth curves pf pigs" dataset
my_data = sm.datasets.get_rdataset("dietox", "geepack").data
# make GLM model
my_formula = "Weight ~ Time"
my_family = sm.families.Gaussian(link=sm.families.links.identity())
model = smf.glm(formula=my_formula, data=my_data, family=my_family)
result = model.fit()
# save model with "remove_data=True"
result.save('test_model.pickle', remove_data=True)
new_result = sm.load('test_model.pickle')
print(new_result.model.data.frame)
```
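A sketch of the workaround I've been experimenting with (the attribute names are taken from the issue text above; I don't know if this is the supported way to scrub the data):
```python
# After fitting, drop the remaining data references manually before pickling.
# This is an assumption/workaround, not an official statsmodels API.
result.remove_data()
result.model.data.frame = None
result.model.data.orig_endog = None
result.model.data.orig_exog = None
result.save('test_model.pickle', remove_data=True)
```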
|
open
|
2023-07-03T15:24:21Z
|
2023-07-11T06:45:17Z
|
https://github.com/statsmodels/statsmodels/issues/8947
|
[] |
hiroki32
| 1
|
plotly/dash-table
|
plotly
| 716
|
Cell selection does not restore previous cell background after selection
|
I'm using a Dash table component with a custom background color (grey) defined with `style_data_conditional`. When I select any cell of my table, the background color changes to the selection color (hot pink). When I then select another cell, the previously selected cell becomes white instead of the color I defined (grey).
The grey color is restored only if the cell is redrawn, either by a refresh or by a callback that updates the table data model (this is an independent interaction).
I tried this in Firefox and Chrome, and I'm using Dash 1.9.0.
I think this is a bug and hope it can be fixed, because it breaks the custom table style.
Alternatively, it would also be good to have the option to disable the cell selection mechanism.
Thanks
|
open
|
2020-03-06T12:04:25Z
|
2020-03-06T12:04:25Z
|
https://github.com/plotly/dash-table/issues/716
|
[] |
claudioiac
| 0
|
Urinx/WeixinBot
|
api
| 259
|
How is the group-join welcome message implemented?
|
closed
|
2018-07-11T05:04:37Z
|
2018-07-13T07:44:32Z
|
https://github.com/Urinx/WeixinBot/issues/259
|
[] |
WellJay
| 1
|
|
litestar-org/litestar
|
pydantic
| 3,752
|
Enhancement: Allow multiple lifecycle hooks of each type to be run on any given request
|
### Summary
Only having one lifecycle hook per request makes it impossible to use multiple plugins which both set the same hook, or makes it easy to inadvertently break plugins by overriding a hook that they install and depend on (whether at the root application or a lower layer).
It also places in application code the responsibility of being aware of hooks installed by outer layers, and ensuring that any hooks at inner layers take responsibility for those outer hooks' functionality.
The simplest approach to addressing this is to allow each layer to define a list of hooks, and run all of them for each request (halting when any hook reaches a terminal state, for those hooks where this applies).
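As a point of comparison for the halting rule, here is a small composition helper I sketched (my own code, not part of this proposal and not Litestar API) that folds several `before_request`-style hooks into the single hook the current API accepts, stopping as soon as one of them returns a response:
```python
def chain_before_request(*hooks):
    """Run hooks in order; the first non-None return value short-circuits."""
    async def composed(request):
        for hook in hooks:
            result = await hook(request)
            if result is not None:  # terminal state reached, stop running later hooks
                return result
        return None
    return composed
```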
### Basic Example
Just as guards are defined as a list (where this list is merged between layers to assemble a final list of guards), under this proposal the same would be true of lifecycle hooks.
### Drawbacks and Impact
This is a breaking change from an API perspective: Documentation and test cases would need to be updated, plugins or other code assigning lifecycle hooks would need to be modified.
### Unresolved questions
Compared to the proposal embodied in #3748, this provides less flexibility -- hooks following that proposal can decide to invoke their parents either before or after themselves, or can modify results returned by parent hooks if they so choose.
_However_, with this reduction in flexibility there is also a substantive reduction in responsibility: using that proposal, a hook could inadvertently prevent other hooks from running, most concerningly by way of not simply implementing the newer interface.
|
open
|
2024-09-21T17:50:29Z
|
2025-03-20T15:54:55Z
|
https://github.com/litestar-org/litestar/issues/3752
|
[
"Enhancement"
] |
charles-dyfis-net
| 2
|
pytorch/pytorch
|
deep-learning
| 148,883
|
Pytorch2.7+ROCm6.3 is 34.55% slower than Pytorch2.6+ROCm6.2.4
|
Same hardware and software environment; only the PyTorch+ROCm versions differ.
Using ComfyUI to run Hunyuan text-to-video:
ComfyUI:v0.3.24
ComfyUI plugin: teacache
49frames
480x960
20steps
CPU:i5-7500
GPU:AMD 7900XT 20GB
RAM:32GB
PyTorch2.6+ROCm6.2.4 Time taken: 348 seconds 14.7s/it
The VAE Decode Tiled node (parameters: 128 64 32 8) takes: 55 seconds
PyTorch2.7+ROCm6.3 Time taken: 387 seconds 15.66s/it**(11.21% slower)**
The VAE Decode Tiled node (parameters: 128 64 32 8) takes: 74 seconds**(34.55% slower)**
In addition, if the VAE node parameters are set to 256 64 64 8 (the default parameters for NVIDIA graphics cards), it takes a very long time and seems to be stuck, but the program does not crash. The same situation occurs in both PyTorch 2.6 and 2.7.
I'm sorry, I don't know what error message to submit for this discrepancy, but I can cooperate with testing and upload any requested information.
Thank you.
[ComfyUI_running_.json](https://github.com/user-attachments/files/19162936/ComfyUI_running_.json)
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
|
open
|
2025-03-10T12:56:34Z
|
2025-03-19T14:14:52Z
|
https://github.com/pytorch/pytorch/issues/148883
|
[
"module: rocm",
"triaged"
] |
testbug5577
| 6
|
igorbenav/fastcrud
|
sqlalchemy
| 15
|
Add Automatic Filters for Auto Generated Endpoints
|
**Is your feature request related to a problem? Please describe.**
I would like to be able to filter the get_multi endpoint.
**Describe the solution you'd like**
I want to be able to filter and search in the get_multi endpoint. This is such a common feature in most CRUD systems that I consider it relevant. It is also available in flask-muck.
**Describe alternatives you've considered**
Some out-of-the-box solutions are out there:
https://github.com/arthurio/fastapi-filter (sqlalchemy + mongo)
https://github.com/OleksandrZhydyk/FastAPI-SQLAlchemy-Filters (specific sqlalchemy)
**Additional context**
Add any other context or screenshots about the feature request here.
If this is not going to be implemented, could an example be provided?
|
closed
|
2024-02-04T13:19:43Z
|
2024-05-21T04:14:34Z
|
https://github.com/igorbenav/fastcrud/issues/15
|
[
"enhancement",
"Automatic Endpoint"
] |
AndreGuerra123
| 15
|
pydantic/FastUI
|
fastapi
| 32
|
Toggle switches
|
Similar to [this](https://getbootstrap.com/docs/5.1/forms/checks-radios/#switches), we should allow them as well as checkboxes in forms.
|
closed
|
2023-12-01T19:07:40Z
|
2023-12-04T13:07:19Z
|
https://github.com/pydantic/FastUI/issues/32
|
[
"good first issue",
"New Component"
] |
samuelcolvin
| 2
|
pytest-dev/pytest-html
|
pytest
| 7
|
AttributeError: 'TestReport' object has no attribute 'extra'
|
Tried to add the code below, but got error - AttributeError: 'TestReport' object has no attribute 'extra'
```python
from py.xml import html
from html import extras

def pytest_runtest_makereport(__multicall__, item):
    report = __multicall__.execute()
    extra = getattr(report, 'extra', [])
    if report.when == 'call':
        xfail = hasattr(report, 'wasxfail')
        if (report.skipped and xfail) or (report.failed and not xfail):
            url = TestSetup.selenium.current_url
            report.extra.append(extras.url(url))
            screenshot = TestSetup.selenium.get_screenshot_as_base64()
            report.extra.append(extras.image(screenshot, 'Screenshot'))
            html = TestSetup.selenium.page_source.encode('utf-8')
            report.extra.append(extra.text(html, 'HTML'))
            report.extra.append(extra.html(html.div('Additional HTML')))
    report.extra = extra
    return report
```
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "D:\Python27\lib\site-packages_pytest\main.py", line 84, in wrap_session
INTERNALERROR> doit(config, session)
INTERNALERROR> File "D:\Python27\lib\site-packages_pytest\main.py", line 122, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
...........................
pytest_runtest_makereport
INTERNALERROR> report.extra.append(extra.text(item._obj.__doc__.strip(), 'HTML'))
INTERNALERROR> AttributeError: 'TestReport' object has no attribute 'extra'
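For what it's worth, a sketch of the pattern I assume is intended (appending to the local `extra` list from `getattr` and attaching it to the report once at the end, since `TestReport` has no `extra` attribute up front; `TestSetup.selenium` and the `extras` import are from my own snippet above):
```python
def pytest_runtest_makereport(__multicall__, item):
    report = __multicall__.execute()
    extra = getattr(report, 'extra', [])
    if report.when == 'call':
        # Append to the local list, not to report.extra (which doesn't exist yet).
        extra.append(extras.url(TestSetup.selenium.current_url))
        extra.append(extras.image(TestSetup.selenium.get_screenshot_as_base64(), 'Screenshot'))
    report.extra = extra
    return report
```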
|
closed
|
2015-05-26T14:27:45Z
|
2015-05-26T15:27:00Z
|
https://github.com/pytest-dev/pytest-html/issues/7
|
[] |
reddypdl
| 1
|
nschloe/tikzplotlib
|
matplotlib
| 401
|
Legend title is not converted to tikz
|
When creating a legend with a title, the title is not converted to TikZ code.
Python code:
```
import matplotlib.pyplot as plt
from tikzplotlib import save as tikz_save
x=[-1,1]
y=[-1,1]
plt.figure()
plt.plot(x,y,label='my legend')
#plt.xlim([1e-3,5])
#plt.ylim([1e-3,1e2])
plt.legend(title='my title')
#plt.savefig("test.png")
tikz_save("test.tikz",encoding ='utf-8')
```
Tikz output:
```
% This file was created by tikzplotlib v0.9.1.
\begin{tikzpicture}
\definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177}
\begin{axis}[
legend cell align={left},
legend style={fill opacity=0.8, draw opacity=1, text opacity=1, at={(0.03,0.97)}, anchor=north west, draw=white!80!black},
tick align=outside,
tick pos=both,
x grid style={white!69.0196078431373!black},
xmin=-1.1, xmax=1.1,
xtick style={color=black},
y grid style={white!69.0196078431373!black},
ymin=-1.1, ymax=1.1,
ytick style={color=black}
]
\addplot [semithick, color0]
table {%
-1 -1
1 1
};
\addlegendentry{my legend}
\end{axis}
\end{tikzpicture}
```
Expected output:

Obtained output:

|
open
|
2020-04-12T11:30:57Z
|
2023-07-25T12:12:37Z
|
https://github.com/nschloe/tikzplotlib/issues/401
|
[] |
Mpedrosab
| 2
|
kennethreitz/responder
|
graphql
| 323
|
dealing with index.html location not in static directory
|
Responder is a great package, loving it so far!
Wanted to share some feedback on trouble that I ran into while trying to deal with an SPA, specifically serving up a React SPA bootstrapped by Create-React-App. The default webpack build scripts put the index.html in the parent folder of the static content.
```
/build
|- index.html
|- /static
   |- css/
   |- js/
   |- media/
```
Because of this deployed layout, I could not figure out how to point to the index.html file and the static directory. Is there any way to point to the static directory and index.html separately with the add_route method?
For now, my workaround is to move the index.html file into the static folder, but perhaps there are better ways?
Here is my code for responder with the directory arguments:
```
api = responder.API(static_dir="../build/static", static_route="/static", templates_dir="../build")
api.add_route("/", static=True)
if __name__ == '__main__':
api.run(port=3001)
```
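One idea I'm considering (a sketch only; it assumes `index.html` is picked up from `templates_dir` and that `api.template` works the way I expect) is to serve the index through a normal route instead of `add_route(static=True)`:
```python
import responder

api = responder.API(static_dir="../build/static", static_route="/static", templates_dir="../build")

@api.route("/")
def index(req, resp):
    # Render the CRA index.html from the templates dir; /static/* is still
    # served by the static route configured above.
    resp.html = api.template('index.html')

if __name__ == '__main__':
    api.run(port=3001)
```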
Thanks!
|
closed
|
2019-02-28T23:55:16Z
|
2019-03-10T14:44:13Z
|
https://github.com/kennethreitz/responder/issues/323
|
[
"discussion"
] |
KenanHArik
| 4
|
explosion/spaCy
|
deep-learning
| 12,807
|
Importing spacy (or thinc) breaks dot product
|
A very annoying bug that took me forever to track down... I'm at a loss as to what might be going on here.
## How to reproduce the behaviour
```
import numpy as np
# works
small = np.random.randn(5, 6)
small.T @ small
# works
larger = np.random.randn(100, 110)
larger.T @ larger
import spacy # or import thinc
# works
small.T @ small
# works
larger + larger
# hangs forever, max CPU usage
larger.T @ larger # larger.T.dot(larger) also hangs
```
## Your Environment
- **spaCy version:** 3.5.3 (also occurs under 3.6.0)
- **Platform:** macOS-13.4.1-x86_64-i386-64bit
- **Python version:** 3.11.4
- **Pipelines:** de_core_news_md (3.5.0), en_core_web_trf (3.5.0), en_core_web_sm (3.5.0), en_core_web_md (3.5.0)
I tried the same thing on another computer (Raspberry Pi) and it worked flawlessly, but on my Macbook, it hangs.
I can move this to the thinc github if you prefer.
|
closed
|
2023-07-08T22:05:46Z
|
2023-08-19T00:02:07Z
|
https://github.com/explosion/spaCy/issues/12807
|
[
"third-party"
] |
jona-sassenhagen
| 17
|
freqtrade/freqtrade
|
python
| 11,053
|
freqtrade 2024.10 backtesting: shorting does not work?
|
```
{
"$schema": "https://schema.freqtrade.io/schema.json",
"max_open_trades": 100,
"stake_currency": "USDT",
"stake_amount": "unlimited",
"tradable_balance_ratio": 0.99,
"fiat_display_currency": "USD",
"dry_run": true,
"dry_run_wallet": 1000,
"cancel_open_orders_on_exit": false,
"trading_mode": "futures",
"margin_mode": "isolated",
"unfilledtimeout": {
"entry": 10,
"exit": 10,
"exit_timeout_count": 0,
"unit": "minutes"
},
"entry_pricing": {
"price_side": "same",
"use_order_book": true,
"order_book_top": 1,
"price_last_balance": 0.0,
"check_depth_of_market": {
"enabled": false,
"bids_to_ask_delta": 1
}
},
"exit_pricing": {
"price_side": "same",
"use_order_book": true,
"order_book_top": 1
},
"exchange": {
"name": "binance",
"key": "",
"secret": "",
"ccxt_config": {
"httpsProxy": "http://127.0.0.1:7890",
"wsProxy": "http://127.0.0.1:7890"
},
"ccxt_async_config": {
"httpsProxy": "http://127.0.0.1:7890",
"wsProxy": "http://127.0.0.1:7890"
},
"pair_whitelist": [
"BTC/USDT:USDT"
],
"pair_blacklist": [
"BNB/.*"
]
},
"pairlists": [
{
"method": "StaticPairList"
}
],
"api_server": {
"enabled": true,
"listen_ip_address": "127.0.0.1",
"listen_port": 8080,
"verbosity": "error",
"enable_openapi": false,
"jwt_secret_key": "cea5c90043a8d876068a86d813c0f510a8cbf28ef6924fae640265f988ebe153",
"ws_token": "NvP7a0z1y3dFAWxVeZqqpIZrTucdG39oiw",
"CORS_origins": [],
"username": "freqtrader",
"password": "1"
},
"bot_name": "freqtrade",
"initial_state": "running",
"force_entry_enable": false,
"internals": {
"process_throttle_secs": 5
}
}
```
```python
from freqtrade.strategy.interface import IStrategy
from pandas import DataFrame
import talib.abstract as ta


class SimpleShortStrategy(IStrategy):
    timeframe = "5m"

    # Technical indicator parameters
    rsi_period = 14
    cci_period = 20
    short_sma_period = 50
    long_sma_period = 200

    stoploss = -0.05  # maximum stoploss of 5%
    minimal_roi = {
        "0": 0.10  # minimum profit target of 10%
    }

    can_short = True

    def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        """
        Populate indicators for the strategy.
        """
        # RSI indicator
        dataframe['rsi'] = ta.RSI(dataframe, timeperiod=self.rsi_period)
        # CCI indicator
        dataframe['cci'] = ta.CCI(dataframe, timeperiod=self.cci_period)
        # Short- and long-term moving averages
        dataframe['sma_short'] = ta.SMA(dataframe, timeperiod=self.short_sma_period)
        dataframe['sma_long'] = ta.SMA(dataframe, timeperiod=self.long_sma_period)
        return dataframe

    def populate_entry_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        """
        Generate entry signals for short trades.
        """
        dataframe.loc[
            (
                (dataframe['sma_short'] < dataframe['sma_long'])
                # (dataframe['rsi'] > 70) &  # RSI overbought signal
                # (dataframe['cci'] > 100)   # CCI suggests a possible reversal
            ),
            'enter_short'
        ] = 1  # generate short-entry signal
        return dataframe

    def populate_exit_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        """
        Generate exit signals for short trades.
        """
        dataframe.loc[
            (
                (dataframe['sma_short'] > dataframe['sma_long']) |  # short SMA crosses above long SMA (trend reversal)
                (dataframe['rsi'] < 50)                             # RSI back to a neutral level
            ),
            'exit_short'
        ] = 1  # generate exit signal
        return dataframe
```
```
freqtrade backtesting -c user_data/hyperopt-futures-5m-btc.json --strategy SimpleShortStrategy --strategy-path user_data/strategies_short/5m --timeframe 5m --timerange 20210301-20230301 --eps
2024-12-07 20:42:21,656 - freqtrade - INFO - freqtrade 2024.10
2024-12-07 20:42:21,884 - numexpr.utils - INFO - NumExpr defaulting to 12 threads.
2024-12-07 20:42:24,538 - freqtrade.configuration.load_config - INFO - Using config: user_data/hyperopt-futures-5m-btc.json ...
2024-12-07 20:42:24,538 - freqtrade.loggers - INFO - Verbosity set to 0
2024-12-07 20:42:24,539 - freqtrade.configuration.configuration - INFO - Using additional Strategy lookup path: user_data/strategies_short/5m
2024-12-07 20:42:24,539 - freqtrade.configuration.configuration - INFO - Parameter -i/--timeframe detected ... Using timeframe: 5m ...
2024-12-07 20:42:24,539 - freqtrade.configuration.configuration - INFO - Parameter --enable-position-stacking detected ...
2024-12-07 20:42:24,539 - freqtrade.configuration.configuration - INFO - Using max_open_trades: 100 ...
2024-12-07 20:42:24,539 - freqtrade.configuration.configuration - INFO - Parameter --timerange detected: 20210301-20230301 ...
2024-12-07 20:42:24,540 - freqtrade.configuration.configuration - INFO - Using user-data directory: E:\others\freqtrade\user_data ...
2024-12-07 20:42:24,541 - freqtrade.configuration.configuration - INFO - Using data directory: E:\others\freqtrade\user_data\data\binance ...
2024-12-07 20:42:24,541 - freqtrade.configuration.configuration - INFO - Overriding timeframe with Command line argument
2024-12-07 20:42:24,541 - freqtrade.configuration.configuration - INFO - Parameter --cache=day detected ...
2024-12-07 20:42:24,541 - freqtrade.configuration.configuration - INFO - Filter trades by timerange: 20210301-20230301
2024-12-07 20:42:24,542 - freqtrade.exchange.check_exchange - INFO - Checking exchange...
2024-12-07 20:42:24,553 - freqtrade.exchange.check_exchange - INFO - Exchange "binance" is officially supported by the Freqtrade development team.
2024-12-07 20:42:24,553 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.
2024-12-07 20:42:24,553 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
2024-12-07 20:42:24,556 - freqtrade.commands.optimize_commands - INFO - Starting freqtrade in Backtesting mode
2024-12-07 20:42:24,556 - freqtrade.exchange.exchange - INFO - Instance is running with dry_run enabled
2024-12-07 20:42:24,556 - freqtrade.exchange.exchange - INFO - Using CCXT 4.4.24
2024-12-07 20:42:24,557 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap'}, 'httpsProxy': 'http://127.0.0.1:7890', 'wsProxy': 'http://127.0.0.1:7890'}
2024-12-07 20:42:24,566 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap'}, 'httpsProxy': 'http://127.0.0.1:7890', 'wsProxy': 'http://127.0.0.1:7890'}
2024-12-07 20:42:24,578 - freqtrade.exchange.exchange - INFO - Using Exchange "Binance"
2024-12-07 20:42:26,831 - freqtrade.resolvers.exchange_resolver - INFO - Using resolved exchange 'Binance'...
2024-12-07 20:42:27,446 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy SimpleShortStrategy from 'E:\others\freqtrade\user_data\strategies_short\5m\SimpleShortStrategy.py'...
2024-12-07 20:42:27,446 - freqtrade.strategy.hyper - INFO - Found no parameter file.
2024-12-07 20:42:27,446 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'timeframe' with value in config file: 5m.
2024-12-07 20:42:27,446 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_currency' with value in config file: USDT.
2024-12-07 20:42:27,447 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_amount' with value in config file: unlimited.
2024-12-07 20:42:27,447 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'unfilledtimeout' with value in config file: {'entry': 10, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}.
2024-12-07 20:42:27,447 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'max_open_trades' with value in config file: 100.
2024-12-07 20:42:27,447 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using minimal_roi: {'0': 0.1}
2024-12-07 20:42:27,447 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using timeframe: 5m
2024-12-07 20:42:27,447 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stoploss: -0.05
2024-12-07 20:42:27,448 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop: False
2024-12-07 20:42:27,448 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop_positive_offset: 0.0
2024-12-07 20:42:27,448 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_only_offset_is_reached: False
2024-12-07 20:42:27,448 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_custom_stoploss: False
2024-12-07 20:42:27,448 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using process_only_new_candles: True
2024-12-07 20:42:27,448 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_types: {'entry': 'limit', 'exit': 'limit', 'stoploss': 'limit', 'stoploss_on_exchange': False, 'stoploss_on_exchange_interval': 60}
2024-12-07 20:42:27,449 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_time_in_force: {'entry': 'GTC', 'exit': 'GTC'}
2024-12-07 20:42:27,449 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_currency: USDT
2024-12-07 20:42:27,449 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_amount: unlimited
2024-12-07 20:42:27,449 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using startup_candle_count: 0
2024-12-07 20:42:27,449 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using unfilledtimeout: {'entry': 10, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}
2024-12-07 20:42:27,449 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_exit_signal: True
2024-12-07 20:42:27,449 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_only: False
2024-12-07 20:42:27,449 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_roi_if_entry_signal: False
2024-12-07 20:42:27,449 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_offset: 0.0
2024-12-07 20:42:27,449 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using disable_dataframe_checks: False
2024-12-07 20:42:27,449 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_buying_expired_candle_after: 0
2024-12-07 20:42:27,450 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using position_adjustment_enable: False
2024-12-07 20:42:27,450 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_entry_position_adjustment: -1
2024-12-07 20:42:27,450 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_open_trades: 100
2024-12-07 20:42:27,450 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
2024-12-07 20:42:27,465 - freqtrade.resolvers.iresolver - INFO - Using resolved pairlist StaticPairList from 'E:\others\freqtrade\freqtrade\plugins\pairlist\StaticPairList.py'...
2024-12-07 20:42:27,473 - freqtrade.optimize.backtesting - INFO - Using fee 0.0500% - worst case fee from exchange (lowest tier).
2024-12-07 20:42:27,660 - freqtrade.optimize.backtesting - INFO - Loading data from 2021-03-01 00:00:00 up to 2023-03-01 00:00:00 (730 days).
2024-12-07 20:42:27,689 - freqtrade.optimize.backtesting - INFO - Dataload complete. Calculating indicators
2024-12-07 20:42:27,691 - freqtrade.data.btanalysis - INFO - Loading backtest result from E:\others\freqtrade\user_data\backtest_results\backtest-result-2024-12-07_20-28-13.json
2024-12-07 20:42:27,692 - freqtrade.optimize.backtesting - INFO - Reusing result of previous backtest for SimpleShortStrategy
Result for strategy SimpleShortStrategy
BACKTESTING REPORT
┏━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Pair ┃ Trades ┃ Avg Profit % ┃ Tot Profit USDT ┃ Tot Profit % ┃ Avg Duration ┃ Win Draw Loss Win% ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
│ BTC/USDT:USDT │ 0 │ 0.0 │ 0.000 │ 0.0 │ 0:00 │ 0 0 0 0 │
│ TOTAL │ 0 │ 0.0 │ 0.000 │ 0.0 │ 0:00 │ 0 0 0 0 │
└───────────────┴────────┴──────────────┴─────────────────┴──────────────┴──────────────┴────────────────────────┘
LEFT OPEN TRADES REPORT
┏━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Pair ┃ Trades ┃ Avg Profit % ┃ Tot Profit USDT ┃ Tot Profit % ┃ Avg Duration ┃ Win Draw Loss Win% ┃
┡━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
│ TOTAL │ 0 │ 0.0 │ 0.000 │ 0.0 │ 0:00 │ 0 0 0 0 │
└───────┴────────┴──────────────┴─────────────────┴──────────────┴──────────────┴────────────────────────┘
ENTER TAG STATS
┏━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Enter Tag ┃ Entries ┃ Avg Profit % ┃ Tot Profit USDT ┃ Tot Profit % ┃ Avg Duration ┃ Win Draw Loss Win% ┃
┡━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
│ TOTAL │ 0 │ 0.0 │ 0.000 │ 0.0 │ 0:00 │ 0 0 0 0 │
└───────────┴─────────┴──────────────┴─────────────────┴──────────────┴──────────────┴────────────────────────┘
EXIT REASON STATS
┏━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Exit Reason ┃ Exits ┃ Avg Profit % ┃ Tot Profit USDT ┃ Tot Profit % ┃ Avg Duration ┃ Win Draw Loss Win% ┃
┡━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
│ TOTAL │ 0 │ 0.0 │ 0.000 │ 0.0 │ 0:00 │ 0 0 0 0 │
└─────────────┴───────┴──────────────┴─────────────────┴──────────────┴──────────────┴────────────────────────┘
MIXED TAG STATS
┏━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Enter Tag ┃ Exit Reason ┃ Trades ┃ Avg Profit % ┃ Tot Profit USDT ┃ Tot Profit % ┃ Avg Duration ┃ Win Draw Loss Win% ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
│ TOTAL │ │ 0 │ 0.0 │ 0.000 │ 0.0 │ 0:00 │ 0 0 0 0 │
└───────────┴─────────────┴────────┴──────────────┴─────────────────┴──────────────┴──────────────┴────────────────────────┘
No trades made. Your starting balance was 1000 USDT, and your stake was unlimited.
Backtested 2021-03-01 00:00:00 -> 2023-03-01 00:00:00 | Max open trades : 1
STRATEGY SUMMARY
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Strategy ┃ Trades ┃ Avg Profit % ┃ Tot Profit USDT ┃ Tot Profit % ┃ Avg Duration ┃ Win Draw Loss Win% ┃ Drawdown ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ SimpleShortStrategy │ 0 │ 0.00 │ 0.000 │ 0.0 │ 0:00 │ 0 0 0 0 │ 0 USDT 0.00% │
└─────────────────────┴────────┴──────────────┴─────────────────┴──────────────┴──────────────┴────────────────────────┴───────────────┘
```
|
closed
|
2024-12-07T12:46:38Z
|
2024-12-10T03:21:43Z
|
https://github.com/freqtrade/freqtrade/issues/11053
|
[
"Question"
] |
Toney
| 2
|
taverntesting/tavern
|
pytest
| 791
|
how to include saved value in the next stage request url
|
We have a service with the URL, https://www.domain.com/parallel/tests/uuid
I would like to run the test stage (name: Verify tests get API) with the saved value `newtest_uuid`.
How can I use `newtest_uuid` in the second stage?
```
includes:
- !include includes.yaml
stages:
- name: Verify tests post API
request:
url: "{HOST:s}/parallel/tests/"
method: POST
json:
name: 'co-test-9'
apptype: 'parallel'
user: 'wangxi'
product: 'centos'
build : 'ob-123'
resolution: '1600x1200'
start_url: 'https://www.github.com/'
locales: "['192.168.0.1']"
response:
strict:
- json:off
status_code: 200
save:
json:
newtest_uuid: "testcase.uuid"
- name: Verify tests get API
request:
url: "{HOST:s}/parallel/tests/{newtest_uuid}"
method: GET
response:
strict:
- json:off
status_code: 200
```
I did this, but it doesn't work and I got errors:
```
- name: Verify tests get API
request:
url: !force_format_include"{HOST}/parallel/tests/{newtest_uuid}"
method: GET
response:
strict:
- json:off
status_code: 200
```
How can I use the saved `newtest_uuid` in the second stage?
Thanks
|
closed
|
2022-06-15T09:19:00Z
|
2022-06-16T03:09:09Z
|
https://github.com/taverntesting/tavern/issues/791
|
[] |
colinshin
| 2
|
python-restx/flask-restx
|
api
| 399
|
the first time to open the swagger page will be very slow
|
After updating to Flask 2.0.1 and flask-restx 0.5 and starting app.run(), the first time the swagger page is opened it is very slow, with a wait of tens of seconds to a few minutes. The problem is caused by having too many registered routes; if I register only a few routes, the problem does not occur. Why is the difference between 0.5 and 0.3 so large? Version 0.3 does not have this problem. What exactly causes this, can anyone tell me?
|
open
|
2021-12-27T10:18:46Z
|
2021-12-27T10:18:46Z
|
https://github.com/python-restx/flask-restx/issues/399
|
[
"question"
] |
sbigtree
| 0
|
AUTOMATIC1111/stable-diffusion-webui
|
pytorch
| 16,031
|
[Bug]: cannot load SD2 checkpoint after performance update in dev branch
|
After the huge performance update in the dev branch it is not possible to load an SD2 model, while on the master branch it works:
```
RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float
```
```
Loading weights [3f067a1b94] from /home/user/ai-apps/stable-diffusion-webui/models/Stable-diffusion/sd_v2-1_turbo.safetensors
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "/usr/lib/python3.11/threading.py", line 1002, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
self.run()
File "/usr/lib/python3.11/threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "/home/user/ai-apps/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "/home/user/ai-apps/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/home/user/ai-apps/stable-diffusion-webui/modules/sd_models.py", line 648, in get_sd_model
load_model()
File "/home/user/ai-apps/stable-diffusion-webui/modules/sd_models.py", line 736, in load_model
checkpoint_config = sd_models_config.find_checkpoint_config(state_dict, checkpoint_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/modules/sd_models_config.py", line 119, in find_checkpoint_config
return guess_model_config_from_state_dict(state_dict, info.filename)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/modules/sd_models_config.py", line 91, in guess_model_config_from_state_dict
elif is_using_v_parameterization_for_sd2(sd):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/modules/sd_models_config.py", line 64, in is_using_v_parameterization_for_sd2
out = (unet(x_test, torch.asarray([999], device=device), context=test_cond) - x_test).mean().item()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/modules/sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 789, in forward
emb = self.time_embed(t_emb)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/container.py", line 215, in forward
input = module(input)
^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 527, in network_Linear_forward
return originals.Linear_forward(self, input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/ai-apps/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float
Stable diffusion model failed to load
```
|
open
|
2024-06-16T15:03:44Z
|
2024-06-23T15:23:07Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16031
|
[
"bug-report"
] |
light-and-ray
| 5
|
dask/dask
|
scikit-learn
| 10,949
|
Issue repartitioning a time series by frequency when loaded from parquet file
|
**Describe the issue**:
When loading a parquet file that has a datetime index, I can't repartition based on frequency, getting the following error:
```
Traceback (most recent call last):
File "/Users/.../gitrepos/dask-exp/test_dask_issue.py", line 19, in <module>
df2 = df2.repartition(freq="1D")
File "/Users/.../miniconda3/envs/dask/lib/python3.10/site-packages/dask_expr/_collection.py", line 1184, in repartition
raise TypeError("Can only repartition on frequency for timeseries")
TypeError: Can only repartition on frequency for timeseries
```
This is despite the fact that the dataframe loaded from the parquet file has a `datetime64[ns]` index dtype.
Note that a dataframe from the time series generator can be repartitioned by frequency.
**Minimal Complete Verifiable Example**:
```python
import dask
dask.config.set({'dataframe.query-planning': True})
import dask.dataframe as dd
df1 = dask.datasets.timeseries(
start="2000",
end="2001",
freq="1h",
seed=1,
)
df1 = df1.repartition(freq="1ME")
df1.to_parquet("test")
df2 = dd.read_parquet(
"test/*parquet",
index="timestamp",
columns=["x", "y"]
)
print(df2.index.dtype)
df2 = df2.repartition(freq="1D")
print(df2.compute())
```
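A workaround I'm experimenting with (an assumption on my part, not a confirmed fix): ask `read_parquet` to compute divisions from the parquet statistics so the index is recognised as a properly divisioned timeseries, or rebuild the divisions by re-setting the index.
```python
# Option 1: let dask derive divisions from parquet row-group statistics
# (if your dask version supports calculate_divisions).
df2 = dd.read_parquet(
    "test/*parquet",
    index="timestamp",
    columns=["x", "y"],
    calculate_divisions=True,
)

# Option 2: rebuild divisions by re-setting the (already sorted) index.
df2 = df2.reset_index().set_index("timestamp", sorted=True)

df2 = df2.repartition(freq="1D")
```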
**Anything else we need to know?**:
I'm looking to repartition data loaded via parquet for efficient time-series-based queries. The default partitioning results in larger-than-needed memory usage.
**Environment**:
- Dask version: 2024.2.1
- Python version: 3.10.13
- Operating System: Mac OSX
- Install method (conda, pip, source): pip
|
open
|
2024-02-24T18:30:24Z
|
2024-04-04T12:56:22Z
|
https://github.com/dask/dask/issues/10949
|
[
"dataframe"
] |
pvaezi
| 5
|
grillazz/fastapi-sqlalchemy-asyncpg
|
pydantic
| 98
|
SQLAlchemy Engine disposal
|
I've come across your project and I really appreciate what you've done.
I have a question that has bugged me for a long time, and I've never seen any project addressing it, including yours.
When you create an `AsyncEngine`, you are supposed to [dispose](https://docs.sqlalchemy.org/en/13/core/connections.html#engine-disposal) it, ideally in `shutdown` event. However, I never see anyone doing it in practice. Is it because you rely on GC of Python to clean it for you?
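For concreteness, a minimal sketch of what I mean by disposing it on shutdown (a generic FastAPI example with a placeholder URL, not code from this project):
```python
from fastapi import FastAPI
from sqlalchemy.ext.asyncio import create_async_engine

app = FastAPI()
engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/db")

@app.on_event("shutdown")
async def dispose_engine() -> None:
    # Explicitly dispose the AsyncEngine instead of relying on garbage collection.
    await engine.dispose()
```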
|
closed
|
2023-07-13T19:17:29Z
|
2023-07-20T08:25:35Z
|
https://github.com/grillazz/fastapi-sqlalchemy-asyncpg/issues/98
|
[] |
hmbui-noze
| 1
|
pydantic/pydantic
|
pydantic
| 11,335
|
Unable to pip install pydantic
|
Hi,
I was trying to set up a repository from source that had `pydantic >= 2.9.0`, but I keep getting the following error -
<img width="917" alt="Image" src="https://github.com/user-attachments/assets/e0410efc-5fb7-494e-80e4-96231fff1489" />
I checked my internet connection a couple of times just before posting this issue. Can you check if the new release was deployed correctly? Or is anyone else facing this issue?
|
closed
|
2025-01-24T04:27:27Z
|
2025-01-24T04:51:19Z
|
https://github.com/pydantic/pydantic/issues/11335
|
[] |
whiz-Tuhin
| 1
|
allenai/allennlp
|
data-science
| 4,855
|
Models: missing None check in PrecoReader's text_to_instance method.
|
<!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [x] I have verified that the issue exists against the `master` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/master/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [x] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/master/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/master) to find out if the bug was already fixed in the master branch.
- [x] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [x] I have included in the "Related issues or possible duplicates" section below all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [x] I have included in the "Environment" section below the output of `pip freeze`.
- [x] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
Hi,
I think a `None` check is missing at that [line](https://github.com/allenai/allennlp-models/blob/ea1f71c79c329db1b66d9db79f0eaa39d2fd2857/allennlp_models/coref/dataset_readers/preco.py#L94) in `PrecoReader`.
According to the function argument list, and the subsequent call to `make_coref_instance`, `clusters` should be allowed to be `None`.
A typical use-case would be e.g. inference where we don't have any info about the clusters.
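A minimal sketch of the kind of guard I have in mind (purely illustrative; the helper name is made up and the loop body is elided):
```python
def adjust_clusters(gold_clusters):
    """Hypothetical stand-in for the per-cluster loop in text_to_instance."""
    if gold_clusters is None:
        return None  # inference-time call: nothing to process
    adjusted = []
    for cluster in gold_clusters:
        adjusted.append(list(cluster))  # placeholder for the existing per-cluster processing
    return adjusted

print(adjust_clusters(None))        # -> None instead of a TypeError
print(adjust_clusters([[(0, 1)]]))  # -> [[(0, 1)]]
```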
<details>
<summary><b>Python traceback:
</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
```
Traceback (most recent call last):
File "/home/fco/coreference/bug.py", line 15, in <module>
instance = reader.text_to_instance(sentences)
File "/home/fco/anaconda3/envs/coref/lib/python3.8/site-packages/allennlp_models/coref/dataset_readers/preco.py", line 94, in text_to_instance
for cluster in gold_clusters:
TypeError: 'NoneType' object is not iterable
```
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS: Ubuntu 18.04.3 LTS
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version: 3.8.5
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
absl-py==0.11.0
allennlp==1.2.0
allennlp-models==1.2.0
attrs==20.3.0
blis==0.4.1
boto3==1.16.14
botocore==1.19.14
cachetools==4.1.1
catalogue==1.0.0
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
conllu==4.2.1
cymem==2.0.4
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.3.1/en_core_web_sm-2.3.1.tar.gz
filelock==3.0.12
ftfy==5.8
future==0.18.2
google-auth==1.23.0
google-auth-oauthlib==0.4.2
grpcio==1.33.2
h5py==3.1.0
idna==2.10
importlib-metadata==2.0.0
iniconfig==1.1.1
jmespath==0.10.0
joblib==0.17.0
jsonnet==0.16.0
jsonpickle==1.4.1
Markdown==3.3.3
murmurhash==1.0.4
nltk==3.5
numpy==1.19.4
oauthlib==3.1.0
overrides==3.1.0
packaging==20.4
pandas==1.1.4
plac==1.1.3
pluggy==0.13.1
preshed==3.0.4
protobuf==3.13.0
py==1.9.0
py-rouge==1.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyconll==2.3.3
pyparsing==2.4.7
PySocks==1.7.1
pytest==6.1.2
python-dateutil==2.8.1
pytz==2020.4
regex==2020.10.28
requests==2.24.0
requests-oauthlib==1.3.0
rsa==4.6
s3transfer==0.3.3
sacremoses==0.0.43
scikit-learn==0.23.2
scipy==1.5.4
sentencepiece==0.1.94
six==1.15.0
spacy==2.3.2
srsly==1.0.3
tensorboard==2.4.0
tensorboard-plugin-wit==1.7.0
tensorboardX==2.1
thinc==7.4.1
threadpoolctl==2.1.0
tokenizers==0.9.2
toml==0.10.2
torch==1.7.0
tqdm==4.51.0
transformers==3.4.0
tweepy==3.9.0
typing==3.7.4.3
typing-extensions==3.7.4.3
urllib3==1.25.11
wasabi==0.8.0
wcwidth==0.2.5
Werkzeug==1.0.1
word2number==1.1
zipp==3.4.0
```
</p>
</details>
## Steps to reproduce
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```python
import spacy
from allennlp.data.token_indexers import SingleIdTokenIndexer, TokenCharactersIndexer
from allennlp_models.coref import PrecoReader
my_text = "Night you. Subdue creepeth cattle creeping living lesser."
sp = spacy.load("en_core_web_sm")
doc = sp(my_text)
sentences = [[token.text for token in sent] for sent in doc.sents]
reader = PrecoReader(max_span_width=10, token_indexers={"tokens": SingleIdTokenIndexer(),
"token_characters": TokenCharactersIndexer()})
instance = reader.text_to_instance(sentences)
```
</p>
</details>
|
closed
|
2020-12-09T14:16:36Z
|
2020-12-10T20:30:06Z
|
https://github.com/allenai/allennlp/issues/4855
|
[
"bug"
] |
frcnt
| 0
|
jumpserver/jumpserver
|
django
| 14,140
|
[Feature] LDAP login requires changing the initial password
|
### Product version
v3.10.9
### Edition
- [ ] Community Edition
- [X] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation method
- [ ] Online installation (one-command install)
- [ ] Offline package installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source installation
### ⭐️ Feature description
With LDAP authentication, if a domain account has an initial password set, that password must be changed after the first login. Other systems I log into with the domain password prompt me to change the initial password and pop up a change-password dialog, but when I log into JumpServer there is no such prompt — it simply reports a wrong password. Could this be changed so that logging into JumpServer with a domain account prompts for the initial password change and shows a change-password dialog?
### Proposed solution
When integrating with LDAP accounts that require an initial password change, can an entry point for changing the password be provided? Otherwise authentication fails and the user cannot log in.
### Additional information
_No response_
|
closed
|
2024-09-13T06:36:14Z
|
2024-10-10T05:56:26Z
|
https://github.com/jumpserver/jumpserver/issues/14140
|
[
"⭐️ Feature Request"
] |
guoheng888
| 1
|
microsoft/unilm
|
nlp
| 1,523
|
All download links failed: vqkd_encoder pre-trained weight of beit2
|
Please refer to https://github.com/microsoft/unilm/blob/master/beit2/TOKENIZER.md
|
closed
|
2024-04-12T13:40:18Z
|
2024-04-12T18:08:14Z
|
https://github.com/microsoft/unilm/issues/1523
|
[] |
JoshuaChou2018
| 2
|
NVIDIA/pix2pixHD
|
computer-vision
| 206
|
THCTensorScatterGather.cu line=380 error=59 : device-side assert triggered when using labels and nyuv2 dataset
|
Hello,
I trained a model using rgb values and the train folders train_A and train_B without any issues.
Now I wanted to use the nyuv2 dataset using the labels as input and rgb as outputs.
I extracted the label images with the class label in each pixel, for a total of 985 classes, including 0 as a no-class label.
I set up the images under `datasets/nyuv2/train_img/0.png` and `datasets/nyuv2/train_label/0.png`
However, I'm stuck trying to train, as I keep getting an error during one-hot encoding. I run the following command:
`CUDA_LAUNCH_BLOCKING=1 python train.py --label_nc 895 --name nyuv2 --dataroot ./datasets/nyuv2 --save_epoch_freq 5 --loadSize 640 --instance_feat --netG global`
And the error:
`/pytorch/aten/src/THC/THCTensorScatterGather.cu:188: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [22,0,0], thread: [127,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorScatterGather.cu line=380 error=59 : device-side assert triggered
Traceback (most recent call last):
File "train.py", line 71, in <module>
Variable(data['image']), Variable(data['feat']), infer=save_fake)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/juancevedo/pix2pixHD/models/pix2pixHD_model.py", line 158, in forward
input_label, inst_map, real_image, feat_map = self.encode_input(label, inst, image, feat)
File "/home/juancevedo/pix2pixHD/models/pix2pixHD_model.py", line 122, in encode_input
input_label = input_label.scatter_(1, label_map.data.long().cuda(), 1.0)
RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorScatterGather.cu:380`
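For reference, my understanding is that the failing line is essentially a one-hot encoding like the following (the shapes here are made up), so any pixel whose label is >= `--label_nc` (or negative) would trip the device-side assert:
```python
import torch

num_classes = 895          # the value passed as --label_nc
label_map = torch.randint(0, num_classes, (1, 1, 64, 64))  # class index per pixel
one_hot = torch.zeros(1, num_classes, 64, 64)
# scatter_ requires every index in label_map to be in [0, num_classes)
one_hot.scatter_(1, label_map.long(), 1.0)
```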
Here are two sample images:
Label image:

Train image:

I suspect something changed with newer PyTorch, but I don't know how to fix it.
Any help is appreciated. Thank you.
|
open
|
2020-07-01T21:55:43Z
|
2020-07-26T19:35:14Z
|
https://github.com/NVIDIA/pix2pixHD/issues/206
|
[] |
entrpn
| 4
|
waditu/tushare
|
pandas
| 1,515
|
hk_hold API rate limit issue
|
With 120 points I get the error below; after raising my points to 620 I still get the same error. And my code already waits 32 seconds between calls!
```python
for day in days:
    print(day)
    df = pro.hk_hold(trade_date=day.replace('-', ''))
    if not df.empty:
        df.to_csv(filename, mode='a', index=False, header=h_flag)

    if h_flag:
        h_flag = False
    if day != days[-1]:
        time.sleep(32)  # can only get 2 times per minutes
```
Exception: Sorry, you may access this API at most 2 times per minute. For permission details see: https://tushare.pro/document/1?doc_id=108.
|
open
|
2021-02-17T14:59:40Z
|
2021-02-17T15:00:29Z
|
https://github.com/waditu/tushare/issues/1515
|
[] |
sq2309
| 1
|
adamerose/PandasGUI
|
pandas
| 114
|
Scatter size should handle null values
|
Using a column with null values (ie. demo data `penguins.body_mass`) on size in a scatter plot produces this error:
```
ValueError:
Invalid element(s) received for the 'size' property of scatter.marker
Invalid elements include: [nan]
The 'size' property is a number and may be specified as:
- An int or float in the interval [0, inf]
- A tuple, list, or one-dimensional numpy array of the above
```
To provide a more seamless experience, it should instead zero out the null values before calling `px.scatter` when a size column is present (or maybe add a filter like `` `column name` > 0 ``?).
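For example, something like this (a toy frame standing in for the penguins demo data) would likely sidestep the error:
```python
import numpy as np
import pandas as pd
import plotly.express as px

df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 5, 6], "body_mass": [10.0, np.nan, 30.0]})
# zero out nulls before handing the column to the size argument
fig = px.scatter(df, x="x", y="y", size=df["body_mass"].fillna(0))
```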
As a further thought, should any exception on hitting `finish` produce a popup?
|
closed
|
2021-03-03T13:39:16Z
|
2021-06-27T08:43:49Z
|
https://github.com/adamerose/PandasGUI/issues/114
|
[
"enhancement"
] |
fdion
| 4
|
hzwer/ECCV2022-RIFE
|
computer-vision
| 381
|
Training RIFE with the X-TRAIN dataset
|
Hello author, have you tried training RIFE with a high-resolution dataset such as [X-TRAIN](https://github.com/JihyongOh/XVFI)? I ran into some problems during training.
The experimental setup is as follows:
1. Constructing frame triplets. Each video in the X-TRAIN dataset has 65 frames (indices 0 to 64), so triplets with different temporal gaps can be built: 0,1,2; 0,2,4; 0,3,6; 0,4,8; 0,5,10; ...; 0,32,64. Using the 4408 training videos, more than 4 million triplets were constructed, with the middle frame as ground truth. During training, each epoch randomly samples 48768 triplets from these 4+ million.
2. Preprocessing. Random crop to 512x512; all other augmentations are the same as in RIFE.
3. 4 GPUs. Learning rate, batch size and other hyperparameters follow RIFE.
4. No RIFE pretrained weights are loaded.
First, the loss becomes NaN after roughly 1400 steps:
<img width="1092" alt="image" src="https://github.com/user-attachments/assets/ba557e7e-b9a1-4044-be86-e226d2e7e192">
I tried increasing weight decay from 1e-3 to 2e-3 (without changing the learning rate or other parameters), and the loss becomes NaN after roughly 5000 steps:
<img width="1082" alt="image" src="https://github.com/user-attachments/assets/cf8ec78b-986a-4a4c-a889-9bc1992ad91f">
I also tried adding BN layers to IFNet (again without changing other parameters); after roughly 40k steps the loss spikes:
<img width="1077" alt="image" src="https://github.com/user-attachments/assets/9f951c81-e7d2-4373-a96b-dd50be475a0c">
Could this be a training-set issue, since the triplets mix many temporal gaps (e.g. 0,1,2 and 0,32,64)? I am now trying to switch to triplets with a fixed gap (0,26,52; 1,27,53; ...; 12,38,64). Do you have any suggestions?
|
closed
|
2024-11-14T02:23:12Z
|
2024-12-31T09:37:27Z
|
https://github.com/hzwer/ECCV2022-RIFE/issues/381
|
[] |
ZXMMD
| 3
|
indico/indico
|
flask
| 6,452
|
Group members do not get notification on abstract comment/review
|
Indico 3.3.2
https://github.com/indico/indico/blob/e99185c27a0e1081d1f494ea706b8770d9a132c8/indico/modules/events/abstracts/controllers/reviewing.py#L188-L209
(Local) group members do not seem to be included in the recipient list. The group has the *manage* permission.
|
open
|
2024-07-23T11:50:58Z
|
2024-10-30T10:55:46Z
|
https://github.com/indico/indico/issues/6452
|
[] |
paulmenzel
| 1
|
deeppavlov/DeepPavlov
|
nlp
| 1,467
|
Windows support for DeepPavlov v0.14.1 and v0.15.0
|
**DeepPavlov version** : 0.14.1 and 0.15.0
**Python version**: 3.7
**Operating system** (ubuntu linux, windows, ...): Windows 10
**Issue**:
Attempting to upgrade to v0.14.1 or v0.15.0 fails on Windows because uvloop is not supported on Windows. Is this a known issue, and are there any workarounds?
See installation error traceback (file paths removed):
**Error (including full traceback)**:
```
(venv) C:\...>pip install --upgrade deeppavlov==0.14.1
Collecting deeppavlov==0.14.1
Using cached deeppavlov-0.14.1-py3-none-any.whl (988 kB)
Requirement already satisfied: numpy==1.18.0 in c:\... (from deeppavlov==0.14.1) (1.18.0)
Requirement already satisfied: overrides==2.7.0 in c:\... (from deeppavlov==0.14.1) (2.7.0)
Requirement already satisfied: h5py==2.10.0 in c:\... (from deeppavlov==0.14.1) (2.10.0)
Requirement already satisfied: click==7.1.2 in c:\... (from deeppavlov==0.14.1) (7.1.2)
Requirement already satisfied: rusenttokenize==0.0.5 in c:\... (from deeppavlov==0.14.1) (0.0.5)
Requirement already satisfied: pytelegrambotapi==3.6.7 in c:\... (from deeppavlov==0.14.1) (3.6.7)
Requirement already satisfied: pandas==0.25.3 in c:\... (from deeppavlov==0.14.1) (0.25.3)
Requirement already satisfied: scikit-learn==0.21.2 in c:\... (from deeppavlov==0.14.1) (0.21.2)
Requirement already satisfied: aio-pika==6.4.1 in c:\... (from deeppavlov==0.14.1) (6.4.1)
Requirement already satisfied: pytz==2019.1 in c:\... (from deeppavlov==0.14.1) (2019.1)
Requirement already satisfied: Cython==0.29.14 in c:\... (from deeppavlov==0.14.1) (0.29.14)
Requirement already satisfied: scipy==1.4.1 in c:\... (from deeppavlov==0.14.1) (1.4.1)
Requirement already satisfied: pymorphy2==0.8 in c:\... (from deeppavlov==0.14.1) (0.8)
Requirement already satisfied: sacremoses==0.0.35 in c:\... (from deeppavlov==0.14.1) (0.0.35)
Requirement already satisfied: pyopenssl==19.1.0 in c:\... (from deeppavlov==0.14.1) (19.1.0)
Requirement already satisfied: pymorphy2-dicts-ru in c:\... (from deeppavlov==0.14.1) (2.4.417127.4579844)
Requirement already satisfied: prometheus-client==0.7.1 in c:\... (fromdeeppavlov==0.14.1) (0.7.1)
Requirement already satisfied: ruamel.yaml==0.15.100 in c:\... (from deeppavlov==0.14.1) (0.15.100)
Requirement already satisfied: filelock==3.0.12 in c:\... (from deeppavlov==0.14.1) (3.0.12)
Requirement already satisfied: fastapi==0.47.1 in c:\... (from deeppavlov==0.14.1) (0.47.1)
Requirement already satisfied: tqdm==4.41.1 in c:\... (from deeppavlov==0.14.1) (4.41.1)
Requirement already satisfied: requests==2.22.0 in c:\... (from deeppavlov==0.14.1) (2.22.0)
Requirement already satisfied: pydantic==1.3 in c:\... (from deeppavlov==0.14.1) (1.3)
Collecting uvloop==0.14.0
Using cached uvloop-0.14.0.tar.gz (2.0 MB)
ERROR: Command errored out with exit status 1:
command: 'c:\...\venv\scripts\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\...\\AppData\\Local\\Temp\\pip-install-_6m26rhl\\uvloop_f3b1771d349c4f5facada899f0c22cec\\setup.py'"'"'; __file__='"'"'C:\\...\\AppData\\Local\\Temp\\pip-install-_6m26rhl\\uvloop_f3b1771d349c4f5facada899f0c22cec\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.re
ad().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\...\AppData\Local\Temp\pip-pip-egg-info-x2q39lc9'
cwd: C:\...\AppData\Local\Temp\pip-install-_6m26rhl\uvloop_f3b1771d349c4f5facada899f0c22cec\
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\...\AppData\Local\Temp\pip-install-_6m26rhl\uvloop_f3b1771d349c4f5facada899f0c22cec\setup.py", line 15, in <module>
raise RuntimeError('uvloop does not support Windows at the moment')
RuntimeError: uvloop does not support Windows at the moment
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/84/2e/462e7a25b787d2b40cf6c9864a9e702f358349fc9cfb77e83c38acb73048/uvloop-0.14.0.ta
r.gz#sha256=123ac9c0c7dd71464f58f1b4ee0bbd81285d96cdda8bc3519281b8973e3a461e (from https://pypi.org/simple/uvloop/). Command errored out with e
xit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement uvloop==0.14.0 (from deeppavlov) (from versions: 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4
.5, 0.4.6, 0.4.7, 0.4.8, 0.4.9, 0.4.10, 0.4.11, 0.4.12, 0.4.13, 0.4.14, 0.4.15, 0.4.16, 0.4.17, 0.4.18, 0.4.19, 0.4.20, 0.4.21, 0.4.22, 0.4.23,
0.4.24, 0.4.25, 0.4.26, 0.4.27, 0.4.28, 0.4.29, 0.4.30, 0.4.31, 0.4.32, 0.4.33, 0.4.34, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4, 0.5.5, 0.6.0, 0.6.5
, 0.6.6, 0.6.7, 0.6.8, 0.7.0, 0.7.1, 0.7.2, 0.8.0, 0.8.1, 0.9.0, 0.9.1, 0.10.0, 0.10.1, 0.10.2, 0.10.3, 0.11.0, 0.11.1, 0.11.2, 0.11.3, 0.12.0r
c1, 0.12.0, 0.12.1, 0.12.2, 0.13.0rc1, 0.13.0, 0.14.0rc1, 0.14.0rc2, 0.14.0, 0.15.0, 0.15.1, 0.15.2, 0.15.3)
ERROR: No matching distribution found for uvloop==0.14.0
```
|
closed
|
2021-07-21T19:24:20Z
|
2022-11-09T00:15:14Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1467
|
[
"bug"
] |
priyankshah7
| 6
|
Miserlou/Zappa
|
django
| 1,277
|
Not able to use "nested" project folder
|
## Context
When using Zappa, I'm not able to structure the project like this:
```
/zappa_settings.json
/project/manage.py
/project/project/settings.py
```
Even using `"django_settings": "project.project.settings.py"` won't work. Instead, I'm forced to do this:
```
/zappa_settings.json
/manage.py
/project/settings.py
```
Using the `"django_settings": "project.settings.py"` configuration.
## Steps to Reproduce
1. Create a root folder
2. Create a Django Project
3. Runs `zappa init` to create a configuration file, it will point `django_settings` to `project.project.settings`.
4. Deploy the code.
5. When accessed, the log will show something like:
```
project.urls could not be imported
```
## Your Environment
* Zappa version used: 0.45.1
* Operating System and Python version: OSx 10.12.6 / Python 3.6
* The output of `pip freeze`:
```
argcomplete==1.9.2
base58==0.2.4
boto3==1.4.8
botocore==1.8.6
certifi==2017.11.5
cfn-flip==0.2.5
chardet==3.0.4
click==6.7
Django==1.11.7
docutils==0.14
durationpy==0.5
future==0.16.0
hjson==3.0.1
idna==2.6
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.19.0
placebo==0.8.1
python-dateutil==2.6.1
python-slugify==1.2.4
pytz==2017.3
PyYAML==3.12
requests==2.18.4
s3transfer==0.1.12
six==1.11.0
toml==0.9.3
tqdm==4.19.1
troposphere==2.1.1
Unidecode==0.4.21
urllib3==1.22
Werkzeug==0.12
wsgi-request-logger==0.4.6
zappa==0.45.1
```
* Your `zappa_settings.json`:
```
{
    "dev": {
        "aws_region": "us-west-1",
        "django_settings": "project.project.settings",
        "project_name": "project",
        "runtime": "python3.6",
        "s3_bucket": "zappa-nj48kxgbr"
    }
}
```
|
closed
|
2017-12-01T14:13:14Z
|
2018-03-31T16:45:43Z
|
https://github.com/Miserlou/Zappa/issues/1277
|
[] |
jonatasbaldin
| 4
|
redis/redis-om-python
|
pydantic
| 464
|
pydantic validators fail silently
|
This code works fine if the password conforms to the validator rules, especially during assignment to a parent JsonModel. However, when I run a get() method to retrieve data, I run into errors.
I have the following EmbeddedJsonModel.
```
class identity(EmbeddedJsonModel):
    '''
    A Password|Token verification class
    '''
    password: str | None
    auth_token: str | None
    # session_token: str | None

    @root_validator
    def password_or_token(cls, values):
        if values['password'] or values['auth_token']:
            return values
        raise ValueError("Password or Token must be provided")

    @validator("password", pre=True)
    def password_hasher(cls, password):
        if password:
            if len(password) >= 8 and len(password) <= 16:
                if not (any(char in special_characters for char in password)
                        and any(char.isdigit() for char in password)):
                    raise ValueError("Password must contain at least 1 number and 1 special character")
            else:
                raise ValueError("Password must be between 8 and 16 characters")
            return ph().hash(password)
```
`x = await RedisUser.get(pk)` This fails with the following error: KeyError: 'password'
This indicates that the Model completely rejected the key 'password' on getting it back. Since the value is hashed and doesn't necessarily pass the validation check anymore on retrieval from the DB.
If I comment out the `@root_validator` then my 2nd validator returns the correct error.
Similarly,
If I comment the 2nd validator `@validator("password")` then the program works. My root validator runs.
This problem may extend to the sync version as well.
This is what I sent to the DB:
`identity(pk='01GPZQCPEDZG928P7XCTHJYB36', password='<password hash that fails the validator check>', auth_token=None)`
This is what I got back:
`identity(pk='01GPZQCPEDZG928P7XCTHJYB36', auth_token=None)`
|
open
|
2023-01-17T12:01:19Z
|
2023-01-17T12:33:08Z
|
https://github.com/redis/redis-om-python/issues/464
|
[] |
XChikuX
| 1
|
HIT-SCIR/ltp
|
nlp
| 561
|
Which paper is the Eisner algorithm used in dependency (dep) decoding based on?
|
closed
|
2022-03-28T10:57:39Z
|
2022-06-18T00:54:13Z
|
https://github.com/HIT-SCIR/ltp/issues/561
|
[] |
yhj997248885
| 1
|
|
sqlalchemy/alembic
|
sqlalchemy
| 584
|
migration message does not support non ascii characters
|
When I try to create a migration whose message contains non-ASCII characters, the migration fails.
```python
Traceback (most recent call last):
File "/home/lohanna/digesto/op/venv/local/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 14, in template_to_file
output = template.render_unicode(**kw).encode(output_encoding)
File "/home/lohanna/digesto/op/venv/local/lib/python2.7/site-packages/mako/template.py", line 454, in render_unicode
as_unicode=True)
File "/home/lohanna/digesto/op/venv/local/lib/python2.7/site-packages/mako/runtime.py", line 829, in _render
**_kwargs_for_callable(callable_, data))
File "/home/lohanna/digesto/op/venv/local/lib/python2.7/site-packages/mako/runtime.py", line 864, in _render_context
_exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
File "/home/lohanna/digesto/op/venv/local/lib/python2.7/site-packages/mako/runtime.py", line 890, in _exec_template
callable_(context, *args, **kwargs)
File "alembic/script.py.mako", line 1, in render_body
"""${message}
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 29: ordinal not in range(128)
```
|
closed
|
2019-07-03T16:38:27Z
|
2019-07-03T17:40:53Z
|
https://github.com/sqlalchemy/alembic/issues/584
|
[
"question"
] |
lohxx
| 3
|
awtkns/fastapi-crudrouter
|
fastapi
| 55
|
[FEAT] Ability to pass order_by column(s)
|
Right now the models are returned ordered by id (or not really ordered; I didn't dig into all backends, and without an explicit ORDER BY the row order in the database is not guaranteed).
I would like to be able to pass a list of columns that should be used to order the results. Note that it should be treated as a default sort; other functionality or params passed to the query could change it later if you decide to support such a feature.
Given sample:
```python
class Flower(pydantic.BaseModel):
    id: int
    name: str
    color: str
    petals: int
    created_date: datetime.date
    updated_date: datetime.date
```
I want the default sort to be by last updated date and then by name:
```python
router = MemoryCRUDRouter(
    schema=Flower,
    prefix="flowers",
    order_by=["-updated_date", "name"]  # to sort descending, pass - in front of the column name
)
```
This would cause the returned values should be sorted in a proper way.
Optionally this should also support #53, so that passing e.g. `"flower__garden__name"` would order by the `name` property of a related `Garden` model on `Flower`.
|
open
|
2021-03-29T10:07:31Z
|
2021-03-30T05:20:07Z
|
https://github.com/awtkns/fastapi-crudrouter/issues/55
|
[] |
collerek
| 2
|
FactoryBoy/factory_boy
|
sqlalchemy
| 688
|
Need to revise the many-to-many docs for SQLAlchemy
|
#### The problem
The many-to-many documentation does not work for SQLAlchemy.
This is described in #121
#### Proposed solution
[Simple Many-to-many relationship](https://factoryboy.readthedocs.io/en/latest/recipes.html#simple-many-to-many-relationship) in the docs should not use add but append.
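For SQLAlchemy, something along these lines is what I'd expect the recipe to show (a sketch only; `User` and `session` stand in for an existing model with a list-based `groups` relationship and a configured session):
```python
import factory
from factory.alchemy import SQLAlchemyModelFactory

class UserFactory(SQLAlchemyModelFactory):
    class Meta:
        model = User                  # assumed SQLAlchemy model
        sqlalchemy_session = session  # assumed configured session

    @factory.post_generation
    def groups(self, create, extracted, **kwargs):
        if not create or not extracted:
            return
        for group in extracted:
            # SQLAlchemy relationship collections are plain lists, so append(), not add()
            self.groups.append(group)
```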
#### Extra notes
None
|
closed
|
2020-01-24T16:40:01Z
|
2020-01-28T16:04:51Z
|
https://github.com/FactoryBoy/factory_boy/issues/688
|
[
"Doc",
"SQLAlchemy"
] |
kobayashi
| 3
|
Lightning-AI/pytorch-lightning
|
deep-learning
| 20,280
|
SLURM resubmission crashes because of multiprocessing error
|
### Bug description
Hello,
I am using the `SLURMEnvironment` plugin to resubmit jobs automatically. So far it has been working seamlessly
on my academic cluster, but recently when the auto-requeue signal is sent, the python script fails because of some multiprocessing error.
It appears to me that workers in the dataloader are not shut down correctly.
Setting `num_workers=0` does not solve the issue; the same problem persists.
I couldn't really find anything online that addresses a similar issue, so I'd be glad to hear any tips on how to overcome this.
Thanks!
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
```
Epoch 1235: 100Handling auto-requeue signal: 1
Exception in thread Thread-3:
Traceback (most recent call last):
File "/path/to/env/envs/vhg-torch/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/path/to/env/envs/vhg-torch/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/torch/utils/data/_utils/pin_memory.py", line 54, in _pin_memory_loop
do_one_step()
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/torch/utils/data/_utils/pin_memory.py", line 31, in do_one_step
r = in_queue.get(timeout=MP_STATUS_CHECK_INTERVAL)
File "/path/to/env/envs/vhg-torch/lib/python3.9/multiprocessing/queues.py", line 122, in get
return _ForkingPickler.loads(res)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 495, in rebuild_storage_fd
fd = df.detach()
File "/path/to/env/envs/vhg-torch/lib/python3.9/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "/path/to/env/envs/vhg-torch/lib/python3.9/multiprocessing/resource_sharer.py", line 86, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "/path/to/env/envs/vhg-torch/lib/python3.9/multiprocessing/connection.py", line 502, in Client
c = SocketClient(address)
File "/path/to/env/envs/vhg-torch/lib/python3.9/multiprocessing/connection.py", line 630, in SocketClient
s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
File "/path/to/env/envs/vhg-torch/lib/python3.9/multiprocessing/util.py", line 300, in _run_finalizers
finalizer()
File "/path/to/env/envs/vhg-torch/lib/python3.9/multiprocessing/util.py", line 300, in _run_finalizers
finalizer()
File "/path/to/env/envs/vhg-torch/lib/python3.9/multiprocessing/util.py", line 300, in _run_finalizers
finalizer()
File "/path/to/env/envs/vhg-torch/lib/python3.9/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/path/to/env/envs/vhg-torch/lib/python3.9/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/path/to/env/envs/vhg-torch/lib/python3.9/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pymp-l44wc1y0/listener-ha7esg4_'
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pymp-mou7am87/listener-k3a3wu_k'
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pymp-x0mbsgom/listener-k8nrogyx'
...
Traceback (most recent call last):
File "/my/dir/train_twin.py", line 146, in <module>
main(args)
File "/my/dir/train_twin.py", line 140, in main
trainer.fit(lightning_model, train_dataloader, val_dataloader, ckpt_path="last")
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/trainer/trainer.py", line 538, in fit
call._call_and_handle_interrupt(
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/trainer/call.py", line 47, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/trainer/trainer.py", line 574, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/trainer/trainer.py", line 981, in _run
results = self._run_stage()
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/trainer/trainer.py", line 1025, in _run_stage
self.fit_loop.run()
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/loops/fit_loop.py", line 205, in run
self.advance()
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/loops/fit_loop.py", line 363, in advance
self.epoch_loop.run(self._data_fetcher)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 140, in run
self.advance(data_fetcher)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 212, in advance
batch, _, __ = next(data_fetcher)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/loops/fetchers.py", line 133, in __next__
batch = super().__next__()
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/loops/fetchers.py", line 60, in __next__
batch = next(self.iterator)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/utilities/combined_loader.py", line 341, in __next__
out = next(self._iterator)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/utilities/combined_loader.py", line 78, in __next__
out[i] = next(self.iterators[i])
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
data = self._next_data()
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1329, in _next_data
idx, data = self._get_data()
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1285, in _get_data
success, data = self._try_get_data()
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/path/to/env/envs/vhg-torch/lib/python3.9/queue.py", line 180, in get
self.not_empty.wait(remaining)
File "/path/to/env/envs/vhg-torch/lib/python3.9/threading.py", line 316, in wait
gotit = waiter.acquire(True, timeout)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/signal_connector.py", line 33, in __call__
signal_handler(signum, frame)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/signal_connector.py", line 75, in _slurm_sigusr_handler_fn
self.trainer.save_checkpoint(hpc_save_path)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/trainer/trainer.py", line 1365, in save_checkpoint
self.strategy.save_checkpoint(checkpoint, filepath, storage_options=storage_options)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/pytorch/strategies/strategy.py", line 490, in save_checkpoint
self.checkpoint_io.save_checkpoint(checkpoint, filepath, storage_options=storage_options)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/fabric/plugins/io/torch_io.py", line 58, in save_checkpoint
_atomic_save(checkpoint, path)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/lightning/fabric/utilities/cloud_io.py", line 89, in _atomic_save
with fs.transaction, fs.open(urlpath, "wb") as f:
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/fsspec/spec.py", line 1293, in open
f = self._open(
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/fsspec/implementations/local.py", line 184, in _open
return LocalFileOpener(path, mode, fs=self, **kwargs)
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/fsspec/implementations/local.py", line 306, in __init__
self._open()
File "/path/to/env/envs/vhg-torch/lib/python3.9/site-packages/fsspec/implementations/local.py", line 317, in _open
i, name = tempfile.mkstemp()
File "/path/to/env/envs/vhg-torch/lib/python3.9/tempfile.py", line 352, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "/path/to/env/envs/vhg-torch/lib/python3.9/tempfile.py", line 255, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpuneur_2o'
```
### Environment
<details>
<summary>Current environment</summary>
```
* CUDA:
- GPU:
- NVIDIA A40
- available: True
- version: 12.1
* Lightning:
- lightning: 2.4.0
- lightning-utilities: 0.11.7
- pytorch-lightning: 2.4.0
- torch: 2.3.1+cu121
- torchaudio: 2.3.1+cu121
- torchmetrics: 1.4.1
- torchvision: 0.18.1+cu121
* Packages:
- absl-py: 2.1.0
- accelerate: 0.34.0
- addict: 2.4.0
- aiohappyeyeballs: 2.4.0
- aiohttp: 3.10.5
- aiosignal: 1.3.1
- antlr4-python3-runtime: 4.9.3
- anyio: 4.4.0
- argon2-cffi: 23.1.0
- argon2-cffi-bindings: 21.2.0
- arrow: 1.3.0
- asttokens: 2.4.1
- async-lru: 2.0.4
- async-timeout: 4.0.3
- attrs: 24.2.0
- autocommand: 2.2.2
- babel: 2.16.0
- backports.tarfile: 1.2.0
- beautifulsoup4: 4.12.3
- bleach: 6.1.0
- blinker: 1.8.2
- certifi: 2024.8.30
- cffi: 1.17.1
- charset-normalizer: 3.3.2
- click: 8.1.7
- comm: 0.2.2
- configargparse: 1.7
- contourpy: 1.3.0
- cycler: 0.12.1
- dash: 2.18.0
- dash-core-components: 2.0.0
- dash-html-components: 2.0.0
- dash-table: 5.0.0
- datasets: 2.21.0
- debugpy: 1.8.5
- decorator: 5.1.1
- defusedxml: 0.7.1
- diffusers: 0.30.2
- dill: 0.3.8
- docker-pycreds: 0.4.0
- einops: 0.8.0
- exceptiongroup: 1.2.2
- executing: 2.1.0
- fastjsonschema: 2.20.0
- filelock: 3.13.1
- flask: 3.0.3
- fonttools: 4.53.1
- fqdn: 1.5.1
- frozenlist: 1.4.1
- fsspec: 2024.2.0
- gitdb: 4.0.11
- gitpython: 3.1.43
- grpcio: 1.66.1
- gsplat: 1.3.0
- h11: 0.14.0
- h5py: 3.11.0
- httpcore: 1.0.5
- httpx: 0.27.2
- huggingface-hub: 0.24.6
- idna: 3.8
- importlib-metadata: 8.4.0
- importlib-resources: 6.4.4
- inflect: 7.3.1
- ipykernel: 6.29.5
- ipython: 8.18.1
- ipywidgets: 8.1.5
- isoduration: 20.11.0
- itsdangerous: 2.2.0
- jaraco.context: 5.3.0
- jaraco.functools: 4.0.1
- jaraco.text: 3.12.1
- jaxtyping: 0.2.34
- jedi: 0.19.1
- jinja2: 3.1.3
- joblib: 1.4.2
- json5: 0.9.25
- jsonpointer: 3.0.0
- jsonschema: 4.23.0
- jsonschema-specifications: 2023.12.1
- jupyter: 1.1.1
- jupyter-client: 8.6.2
- jupyter-console: 6.6.3
- jupyter-core: 5.7.2
- jupyter-events: 0.10.0
- jupyter-lsp: 2.2.5
- jupyter-server: 2.14.2
- jupyter-server-terminals: 0.5.3
- jupyterlab: 4.2.5
- jupyterlab-pygments: 0.3.0
- jupyterlab-server: 2.27.3
- jupyterlab-widgets: 3.0.13
- kiwisolver: 1.4.7
- lightning: 2.4.0
- lightning-utilities: 0.11.7
- markdown: 3.7
- markdown-it-py: 3.0.0
- markupsafe: 2.1.5
- matplotlib: 3.9.2
- matplotlib-inline: 0.1.7
- mdurl: 0.1.2
- mistune: 3.0.2
- more-itertools: 10.3.0
- mpmath: 1.3.0
- multidict: 6.0.5
- multiprocess: 0.70.16
- natsort: 8.4.0
- nbclient: 0.10.0
- nbconvert: 7.16.4
- nbformat: 5.10.4
- nest-asyncio: 1.6.0
- networkx: 3.2.1
- ninja: 1.11.1.1
- notebook: 7.2.2
- notebook-shim: 0.2.4
- numpy: 1.26.3
- nvdiffrast: 0.3.1
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu12: 2.20.5
- nvidia-nvjitlink-cu12: 12.1.105
- nvidia-nvtx-cu12: 12.1.105
- omegaconf: 2.3.0
- open3d: 0.18.0
- opencv-python: 4.10.0.84
- overrides: 7.7.0
- packaging: 24.1
- pandas: 2.2.2
- pandocfilters: 1.5.1
- parso: 0.8.4
- peft: 0.12.0
- pexpect: 4.9.0
- pillow: 10.2.0
- pip: 24.2
- platformdirs: 4.2.2
- plotly: 5.24.0
- plyfile: 1.1
- prometheus-client: 0.20.0
- prompt-toolkit: 3.0.47
- protobuf: 3.20.3
- psutil: 6.0.0
- ptyprocess: 0.7.0
- pure-eval: 0.2.3
- pyarrow: 17.0.0
- pycparser: 2.22
- pygments: 2.18.0
- pyparsing: 3.1.4
- pyquaternion: 0.9.9
- python-dateutil: 2.9.0.post0
- python-json-logger: 2.0.7
- pytorch-lightning: 2.4.0
- pytz: 2024.1
- pyyaml: 6.0.2
- pyzmq: 26.2.0
- referencing: 0.35.1
- regex: 2024.7.24
- requests: 2.32.3
- retrying: 1.3.4
- rfc3339-validator: 0.1.4
- rfc3986-validator: 0.1.1
- rich: 13.8.0
- roma: 1.5.0
- rpds-py: 0.20.0
- safetensors: 0.4.4
- scikit-learn: 1.5.1
- scipy: 1.13.1
- send2trash: 1.8.3
- sentry-sdk: 2.13.0
- setproctitle: 1.3.3
- setuptools: 73.0.1
- six: 1.16.0
- smmap: 5.0.1
- sniffio: 1.3.1
- soupsieve: 2.6
- stack-data: 0.6.3
- sympy: 1.12
- tenacity: 9.0.0
- tensorboard: 2.17.1
- tensorboard-data-server: 0.7.2
- terminado: 0.18.1
- threadpoolctl: 3.5.0
- timm: 1.0.9
- tinycss2: 1.3.0
- tokenizers: 0.19.1
- tomli: 2.0.1
- torch: 2.3.1+cu121
- torchaudio: 2.3.1+cu121
- torchmetrics: 1.4.1
- torchvision: 0.18.1+cu121
- tornado: 6.4.1
- tqdm: 4.66.5
- traitlets: 5.14.3
- transformers: 4.44.2
- trimesh: 4.4.9
- triton: 2.3.1
- typeguard: 2.13.3
- types-python-dateutil: 2.9.0.20240821
- typing-extensions: 4.9.0
- tzdata: 2024.1
- uri-template: 1.3.0
- urllib3: 2.2.2
- virtualhumangen: 1.0.0
- wandb: 0.17.9
- wcwidth: 0.2.13
- webcolors: 24.8.0
- webencodings: 0.5.1
- websocket-client: 1.8.0
- werkzeug: 3.0.4
- wheel: 0.44.0
- widgetsnbextension: 4.0.13
- xxhash: 3.5.0
- yarl: 1.9.11
- zipp: 3.20.1
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor:
- python: 3.9.19
- release: 6.1.107.1.amd64-smp
- version: #1 SMP PREEMPT_DYNAMIC Mon Sep 2 09:32:21 CEST 2024
```
</details>
### More info
_No response_
|
open
|
2024-09-13T15:00:13Z
|
2024-11-09T18:34:42Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20280
|
[
"bug",
"needs triage",
"ver: 2.4.x"
] |
antonzub99
| 2
|
allenai/allennlp
|
nlp
| 4,675
|
Have a single `TrainerCallback` that can handle both `BatchCallback` and `EpochCallback`.
|
Also, add another call at the end of the whole training run.
This should make it easier to hang on to state inside the callback.
See the discussion at the end of this issue: https://github.com/allenai/allennlp/pull/3970
|
closed
|
2020-09-25T23:00:34Z
|
2020-10-26T17:22:41Z
|
https://github.com/allenai/allennlp/issues/4675
|
[
"Contributions welcome",
"Feature request"
] |
dirkgr
| 7
|
simple-login/app
|
flask
| 1,483
|
Forbid registration with temporary addresses
|
Hello!
Please prohibit registering with a temporary email. There are a lot of people who abuse your service which causes their domains to get on spam lists.
Here is a list of all temporary emails. I hope it will be useful for you:
https://raw.githubusercontent.com/disposable-email-domains/disposable-email-domains/master/disposable_email_blocklist.conf
|
open
|
2022-12-09T16:37:11Z
|
2022-12-13T13:17:27Z
|
https://github.com/simple-login/app/issues/1483
|
[] |
ghost
| 1
|
Sanster/IOPaint
|
pytorch
| 370
|
[Feature Request] add image to prompt for anime or realistic image
|
I think adding this plugin could give better inpainting results.
|
closed
|
2023-09-08T10:48:24Z
|
2025-03-24T02:07:05Z
|
https://github.com/Sanster/IOPaint/issues/370
|
[
"stale"
] |
Kingal20000
| 2
|
Yorko/mlcourse.ai
|
data-science
| 378
|
Minor typo in the 3rd Assignment
|
> In our case there's only **_ine_** feature so ...
|
closed
|
2018-10-16T12:43:37Z
|
2018-10-17T21:34:04Z
|
https://github.com/Yorko/mlcourse.ai/issues/378
|
[
"minor_fix"
] |
foghegehog
| 3
|
opengeos/leafmap
|
streamlit
| 284
|
Better documentation and approach for rescale parameter for multi-band COGs
|
### Description
Two requests:
(1) multi-band COGs frequently need different rescaling for different bands. How to do this isn't documented in the API, but this
works because of how requests handles the params dictionary:
```
m.add_cog_layer(url, rescale=["164,223","130,211","99,212"])
```
so that `rescale=164,223&rescale=130,211&rescale=99,212` is sent to the tiler endpoint.
Maybe this should be documented? (this works with titiler, though I don't know enough about the tiler api to know if it works on all tilers)
(2)
When `rescale` is not supplied as an argument, the rescaling is done across all bands:
```python
if "rescale" not in kwargs:
    stats = cog_stats(url, titiler_endpoint)
    if "message" not in stats:
        percentile_2 = min([stats[s]["percentile_2"] for s in stats])
        percentile_98 = max([stats[s]["percentile_98"] for s in stats])
        kwargs["rescale"] = f"{percentile_2},{percentile_98}"
```
whereas this should (at least) only be computed over the bands specified via `bands` or `bidx`. Even better would be to specify it as above (`rescale=["164,223","130,211","99,212"]`) so that the rescaling is done per band.
|
closed
|
2022-09-08T20:10:51Z
|
2022-11-06T19:51:56Z
|
https://github.com/opengeos/leafmap/issues/284
|
[
"Feature Request"
] |
philvarner
| 2
|
Farama-Foundation/Gymnasium
|
api
| 717
|
[Question] AssertionError when combining discrete and continious action space
|
### Question
Hi all,
I have a gymnasium environment with a combination of a discrete and a continuous action space. So there are actually 3 actions to take. The first 2 are discrete in the range between 0 and `timeslots_for_state_load_percentages_costs`. The third one is a continuous action between 0 and 50.
The observation space is continuous with dimensionality `2 * timeslots_for_state_load_percentages_costs`. Each value can be between 0 and 1.
Here is the code of a (not really meaningful) test example:
```
import gymnasium as gym
from gymnasium import Env
from gymnasium.spaces import Discrete, Box, Tuple, MultiDiscrete, space
import numpy as np
import os
timeslots_for_state_load_percentages_costs = 4
class DSM_Env_RL2(Env):
def __init__(self):
int_action_space = Discrete(timeslots_for_state_load_percentages_costs)
cont_action_space = Box(low=0, high=50, shape=(1,), dtype=np.float32)
# Combined action space
self.action_space = Tuple((int_action_space, int_action_space, cont_action_space))
#Specify observation space
low = np.zeros(2 * timeslots_for_state_load_percentages_costs)
high = np.ones(2 * timeslots_for_state_load_percentages_costs)
# Create the observation space
observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float64)
self.observation_space = observation_space
def reset (self, **kwargs):
super().reset(**kwargs)
info = {}
obs = np.zeros(timeslots_for_state_load_percentages_costs)
return obs, info
def render (self):
pass
def step(self, action ):
# Execute the action in the external simulation and return the next observation, reward, done, and info
action_from_timeslot = action[0]
action_to_timeslot = action[1]
action_shifting_percentage = action[2]
#External environment is not used in this test example
#result_costs, result_peak, result_DC, results_dict, percentage_array_loads_per_timeslot_highest_prices_shortened, percentage_array_loads_per_timeslot_lowest_prices_shortened = Run_Simulations_Help.execute_single_modification_operator_decision_RL2(current_solution, action_from_timeslot, action_to_timeslot, action_shifting_percentage, self.read_RL_data_day, timeslots_for_state_load_percentages_costs )
percentage_array_loads_per_timeslot_highest_prices_shortened =np.zeros(timeslots_for_state_load_percentages_costs)
percentage_array_loads_per_timeslot_lowest_prices_shortened =np.zeros(timeslots_for_state_load_percentages_costs)
#calculate state
state_array = np.concatenate((percentage_array_loads_per_timeslot_highest_prices_shortened, percentage_array_loads_per_timeslot_lowest_prices_shortened))
observation_space= state_array
reward = 1
done = False
info = {}
print("")
return observation_space, reward, done, False, info
#Use Stable Baselines 3 to apply an RL algorithm on the environment
from stable_baselines3 import A2C
gym.register("dsm-env-v1", lambda: DSM_Env_RL2())
env = gym.make("dsm-env-v1")
#Check environment
check_environment = False
if check_environment == True:
from gymnasium.utils.env_checker import check_env
check_env(env.unwrapped)
from stable_baselines3.common.env_checker import check_env
check_env(env)
#Create the files of the model
models_dir = r"C:\Users\wi9632\Desktop\Ergebnisse\DSM\RL\RL_Models\A2C"
logdir = r"C:\Users\wi9632\Desktop\Ergebnisse\DSM\RL\RL_Logs\A2C"
if not os.path.exists(models_dir):
os.makedirs(models_dir)
if not os.path.exists(logdir):
os.makedirs(logdir)
#Define the model
model = A2C('MlpPolicy', env, verbose=1)
#train and save the model
model.learn(total_timesteps=1000)
model.save(os.path.join(models_dir, 'trained_a2c_model'))
```
When running it I get an assertion error from the gynasium environment checker with the message:
`AssertionError: The algorithm only supports (<class 'gymnasium.spaces.box.Box'>, <class 'gymnasium.spaces.discrete.Discrete'>, <class 'gymnasium.spaces.multi_discrete.MultiDiscrete'>, <class 'gymnasium.spaces.multi_binary.MultiBinary'>) as action spaces but Tuple(Discrete(4), Discrete(4), Box(0.0, 50.0, (1,), float32)) was provided`
I have problems understanding the error message. As far as I see it, I define the correct action space by using:
```python
int_action_space = Discrete(timeslots_for_state_load_percentages_costs)
cont_action_space = Box(low=0, high=50, shape=(1,), dtype=np.float32)
# Combined action space
self.action_space = Tuple((int_action_space, int_action_space, cont_action_space))
```
and then in the step function I get the different values of the action space using:
```
action_from_timeslot = action[0]
action_to_timeslot = action[1]
action_shifting_percentage = action[2]
```
Does anyone know why I get this error and how to solve this issue?
|
closed
|
2023-09-19T16:36:43Z
|
2023-09-19T18:54:36Z
|
https://github.com/Farama-Foundation/Gymnasium/issues/717
|
[
"question"
] |
PBerit
| 1
|
inducer/pudb
|
pytest
| 104
|
bpython integration does not work
|
``` pytb
File "/Users/aaronmeurer/anaconda/lib/python3.3/bdb.py", line 47, in trace_dispatch
return self.dispatch_line(frame)
File "/Users/aaronmeurer/anaconda/lib/python3.3/bdb.py", line 65, in dispatch_line
self.user_line(frame)
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/pudb/debugger.py", line 322, in user_line
self.interaction(frame)
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/pudb/debugger.py", line 290, in interaction
show_exc_dialog=show_exc_dialog)
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/pudb/debugger.py", line 1836, in call_with_ui
return f(*args, **kwargs)
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/pudb/debugger.py", line 2014, in interaction
self.event_loop()
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/pudb/debugger.py", line 1980, in event_loop
toplevel.keypress(self.size, k)
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/pudb/ui_tools.py", line 88, in keypress
return handler(self, size, key)
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/pudb/debugger.py", line 1583, in run_cmdline
return run_external_cmdline(w, size, key)
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/pudb/debugger.py", line 1573, in run_external_cmdline
first_cmdline_run)
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/pudb/shell.py", line 91, in run_bpython_shell
bpython.cli.main(locals_=ns)
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/bpython/cli.py", line 1918, in main
banner=banner)
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/bpython/cli.py", line 1815, in curses_wrapper
return func(stdscr, *args, **kwargs)
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/bpython/cli.py", line 1877, in main_curses
bpython.args.exec_code(interpreter, args)
File "/Users/aaronmeurer/anaconda/lib/python3.3/site-packages/bpython/args.py", line 106, in exec_code
with open(args[0], 'r') as sourcefile:
FileNotFoundError: [Errno 2] No such file or directory: 'install'
```
|
closed
|
2014-02-04T17:16:14Z
|
2016-06-24T22:38:23Z
|
https://github.com/inducer/pudb/issues/104
|
[] |
asmeurer
| 8
|
JaidedAI/EasyOCR
|
deep-learning
| 393
|
TypeError: super(type, obj): obj must be an instance or subtype of type
|
While initializing the model according to the README, I got the following error:

|
closed
|
2021-03-12T06:31:02Z
|
2022-03-02T09:24:34Z
|
https://github.com/JaidedAI/EasyOCR/issues/393
|
[] |
mateuszwosinski
| 1
|
gunthercox/ChatterBot
|
machine-learning
| 1,758
|
How to save trained data
|
How can I avoid training every time, or how can I save the trained data?
|
closed
|
2019-06-18T11:47:47Z
|
2020-08-29T20:31:24Z
|
https://github.com/gunthercox/ChatterBot/issues/1758
|
[] |
nikuraj006
| 1
|
alpacahq/alpaca-trade-api-python
|
rest-api
| 604
|
[Bug]: ModuleNotFoundError: No module named 'alpaca_trade_api.stream'
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Trying to stream data but I keep getting the error
ModuleNotFoundError: No module named 'alpaca_trade_api.stream'
I’ve tried pip3 install alpaca_trade_api from here: https://stackoverflow.com/questions/62745845/modulenotfounderror-no-module-named-alpaca-trade-api
Currently Anaconda is using version
alpaca-backtrader-api 0.13.1
Any suggestions?
### Expected Behavior
python3 main.py should work
### Steps To Reproduce
_No response_
### Anything else?
_No response_
|
closed
|
2022-04-13T18:20:18Z
|
2022-04-14T13:52:02Z
|
https://github.com/alpacahq/alpaca-trade-api-python/issues/604
|
[
"question"
] |
EM5813
| 2
|
FactoryBoy/factory_boy
|
django
| 403
|
How to use RelatedFactory for a model without related_name?
|
```python
class User(Model):
    name = CharField(max_length=100)

class Order(Model):
    user = ForeignKey(User)
```
I access the list of orders for a user using `user.order_set.all()`, which works for me. Ref: https://stackoverflow.com/a/42080970/1080135
But when I try to use the RelatedFactory as follows, I get an error saying that User does not have the attribute "order_set"
```python
class UserFactory(DjangoModelFactory):
    class Meta:
        model = User

    order_set = RelatedFactory(Order, 'user')
```
|
closed
|
2017-08-07T11:16:33Z
|
2017-08-07T14:22:08Z
|
https://github.com/FactoryBoy/factory_boy/issues/403
|
[
"Q&A",
"NeedInfo"
] |
pavan-blackbuck
| 2
|
kennethreitz/responder
|
graphql
| 99
|
On modularity & reusable ASGI components
|
So I've been following a general approach with the design of [Starlette](https://github.com/encode/starlette), that I'd love to see Responder follow on with. The aims are to keep complexity nicely bounded into smallish components, that then end up being reusable across different ASGI frameworks. Some examples:
* Starlette's router implementation is, itself, just a plain old ASGI app - you can run it directly, or plug in the test client to it, or submount it or whatever. There's a little light syntactic sugar if you're adding routes on an application instance, but all the complexity is neatly partitioned.
* Similarly Starlette's [GraphQL implementation](https://www.starlette.io/graphql/) is another plain old ASGI app. You can use it in any ASGI framework, or just use it directly without the application instance. Responder could use it if it wanted. (It deals with either running in a threadpool, or using an ASync executor, which is important.)
* Starlette's class based views. Again, they're [just tiny ASGI apps](https://github.com/encode/starlette/blob/master/starlette/endpoints.py#L12-L40) - it's nice because it keeps the complexity nicely bounded away from the application instance, and again, you can use them with any ASGI framework, or use them directly. Being able to plug the test client directly in at that level of interface means you end up with really simple test cases for testing the core functionality.
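To make the "plain old ASGI app" idea concrete, each of the components above ultimately boils down to a callable like this minimal sketch (using the single-callable form for brevity; nothing framework-specific):
```python
async def app(scope, receive, send):
    # any object exposing this interface can be mounted, wrapped in middleware,
    # or driven directly by a test client
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"hello, world"})
```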
So, none of this means that you can't *at the top level* have an application interface that exposes a single point of configuration, but under the hood it all ends up just being composition of small nicely bounded components and middleware.
Places where this could be applied to responder:
* Pull the GraphQL implementation out of the app class itself, and have an standalone ASGI implementation, that the app instance uses composition to add in.
* Design the Router so that it's exposing an ASGI interface. Have router instances that expose the same routing API as the top-level application class, but don't come with any of the other stuff that the application gives you. That'd then be independently usable/testable. You can do neat things like mount a router instance under another app, or wrap it up in middleware, etc...
* Similarly with the class based views. Have the `on_request(req/resp)` interface stuff be implemented as a tiny ASGI app. It's independently usable/testable, you can apply middleware to individual views, you can reuse it in other frameworks, or you can use it as the basis for writing alternate class based view implementations (without the implementation all being tightly bound to the rest of the application class)
|
closed
|
2018-10-18T19:49:12Z
|
2018-10-19T14:58:59Z
|
https://github.com/kennethreitz/responder/issues/99
|
[
"feature"
] |
tomchristie
| 2
|
quantumlib/Cirq
|
api
| 6,994
|
Gauge compiling as a sweep: merge single qubit gates
|
**Problem Statement**
as_sweep() can produce circuits with consecutive parameterized phxz operations.
Usually, after gauge compiling, users would optimize the circuit by merging single-qubit gates, but merging a parameterized circuit isn't currently supported.
**Potential Solutions**
Support this in as_sweep(), or support merging parameterized gates in cirq.merge_single_qubit_gates_to_phxz.
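A minimal illustration of the situation (the gate parameters here are made up):
```python
import cirq
import sympy

q = cirq.LineQubit(0)
t = sympy.Symbol("t")
# two consecutive parameterized PhasedXZ gates, as gauge compiling as_sweep() can leave behind
circuit = cirq.Circuit(
    cirq.PhasedXZGate(x_exponent=t, z_exponent=0.5, axis_phase_exponent=0.0).on(q),
    cirq.PhasedXZGate(x_exponent=0.25, z_exponent=t, axis_phase_exponent=0.0).on(q),
)
# merging these into a single phxz gate is what is being requested;
# today this is not supported for parameterized operations
merged = cirq.merge_single_qubit_gates_to_phxz(circuit)
```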
|
open
|
2025-01-28T22:38:54Z
|
2025-03-13T22:39:52Z
|
https://github.com/quantumlib/Cirq/issues/6994
|
[
"kind/feature-request",
"triage/accepted",
"area/error-mitigation"
] |
babacry
| 3
|
scrapy/scrapy
|
python
| 6,643
|
Add support for async HTTP cache storages
|
https://stackoverflow.com/questions/79396472/how-to-extend-scrapy-with-custom-http-cache-which-needs-to-perform-asynchronous
It should be relatively easy to make it possible to have `HTTPCACHE_STORAGE` storages whose methods are asynchronous, because they are used only in `scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware`, which can be changed to have asynchronous methods. There is even an old PR that does this, #2799, though nowadays we maybe want `async def` methods in the async storage interface instead of returning Deferreds.
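A rough sketch of what such a storage could look like (method names follow the existing sync storage interface; the async variants themselves are hypothetical):
```python
from typing import Optional

from scrapy.http import Request, Response


class AsyncCacheStorage:
    """Hypothetical async HTTP cache storage, mirroring the sync interface."""

    async def open_spider(self, spider) -> None:
        ...  # e.g. open an async database connection

    async def close_spider(self, spider) -> None:
        ...

    async def retrieve_response(self, spider, request: Request) -> Optional[Response]:
        ...  # awaitable lookup instead of a blocking one

    async def store_response(self, spider, request: Request, response: Response) -> None:
        ...
```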
|
open
|
2025-01-31T15:03:52Z
|
2025-02-11T20:18:29Z
|
https://github.com/scrapy/scrapy/issues/6643
|
[
"enhancement",
"asyncio"
] |
wRAR
| 0
|
satwikkansal/wtfpython
|
python
| 303
|
Add notes for modifying class attributes
|
Relates to [class attributes and instance attributes](https://github.com/satwikkansal/wtfpython#-class-attributes-and-instance-attributes)
Consider the following code:
```python
class Foo:
    _count1 = 0
    _count2 = 0
    @staticmethod
    def total(): Foo._count2 += 1
    @classmethod
    def count(cls): cls._count1 += 1
    def __init__(self) -> None:
        self.total()
        self.count()

class Foo1(Foo): pass
class Foo2(Foo): pass

a, b, c = Foo1(), Foo1(), Foo2()
print(Foo._count1, Foo._count2, Foo1._count1, Foo1._count2, Foo2._count1, Foo2._count2)
# 0 3 2 3 1 3
```
The `@classmethod` way counts instances for each class separately, while the `@staticmethod` way counts across all subclasses together.
While this is easy to understand with knowledge of how attribute lookup works, users may not realize it until they run into the problem. Thus, I think it's worth noting, so that users can learn to choose the correct way of setting attributes according to their actual needs.
|
open
|
2022-12-31T03:38:32Z
|
2023-01-05T14:38:26Z
|
https://github.com/satwikkansal/wtfpython/issues/303
|
[] |
ZeroRin
| 2
|
litestar-org/litestar
|
pydantic
| 3,133
|
Enhancement: Support Websockets on the async tests client
|
### Summary
Currently the `websocket_connect` method is only available on the sync client.
It would be nice to support WebSockets with the async client as well.
### Basic Example
```py
test_client = AsyncTestClient(get_application())
ws_session = test_client.websocket_connect()
```
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_
|
closed
|
2024-02-25T09:23:33Z
|
2025-03-20T15:54:26Z
|
https://github.com/litestar-org/litestar/issues/3133
|
[
"Enhancement"
] |
nrbnlulu
| 1
|
newpanjing/simpleui
|
django
| 170
|
After logging in, opening another menu item and then clicking Home in the menu creates two Home tabs
|
**Bug description**
A brief description of the bug:
After logging in, open another menu item, then click Home in the menu; two Home tabs appear.
**Steps to reproduce**
1. After logging in, the home page is shown and the URL in the address bar has no # suffix;
2. Click any other menu item to open a new tab;
3. Click Home in the menu; a new Home tab is added (two in total), and both tabs are in the active state;
**Environment**
1. OS: MacOS 10.14.6
2. Python version: 3.7.4
3. Django version: 2.2.6
4. simpleui version: 3.2
**Other notes**
|
closed
|
2019-10-18T08:48:32Z
|
2019-11-18T05:23:44Z
|
https://github.com/newpanjing/simpleui/issues/170
|
[
"bug"
] |
eshxcmhk
| 0
|
qwj/python-proxy
|
asyncio
| 133
|
How to add a username and password with --auth
|
What is the format of --auth? I tried it, but it is not working.
|
open
|
2021-08-30T16:47:07Z
|
2021-09-08T19:02:14Z
|
https://github.com/qwj/python-proxy/issues/133
|
[] |
apioff
| 1
|
modelscope/modelscope
|
nlp
| 812
|
ModelScope prevents the datasets library from loading datasets correctly
|
Thanks for your error report and we appreciate it a lot.
**Checklist**
* I have searched the tutorial on modelscope [doc-site](https://modelscope.cn/docs)
* I have searched related issues but cannot get the expected help.
* The bug has not been fixed in the latest version.
**Describe the bug**
In modelscope 1.13.2, the DownloadManager._download method of the datasets library is replaced, which breaks loading of some datasets.
**To Reproduce**
I train with the swift framework and added a custom dataset via custom code.
I downloaded the [M3IT](https://huggingface.co/datasets/MMInstruction/M3IT) dataset locally and load it with the following code:
```python
ds = datasets.load_dataset(f"{_M3IT_DIR}/M3IT_IMG.py", name='coco', trust_remote_code=True)
```
With version 1.13.2, this raises the error:
```
FileNotFoundError: Local file /*****/m3it_qwenSource=SDK&Revision=master&FilePath=.%2Fdata%2Fcaptioning%2Fcoco%2Ftrain.jsonl doesn't exist
```
The stack trace shows this is caused by `modelscope/msdatasets/utils/hf_datasets_util.py:91`. Rolling back to version 1.13.1 removes the problem.
Please don't make this kind of invasive change. I see the changelog does mark it as a breaking change, and it is indeed very breaking.
**Your Environments (__required__)**
* modelscope: 1.13.2/1.13.1
* datasets: 2.18.0
* ms-swift: 1.7.3
Please @ corresponding people according to your problem:
Dataset related: @wangxingjun778
|
closed
|
2024-03-28T07:26:37Z
|
2024-05-23T01:55:58Z
|
https://github.com/modelscope/modelscope/issues/812
|
[
"Stale"
] |
zodiacg
| 5
|
babysor/MockingBird
|
pytorch
| 65
|
Temporary workaround for package installation errors on M1 chips
|
My setup: M1 MacBook Air, Python 3.9.
When installing packages with pip I get endless errors; after installing them another way, the runtime errors include messages like "have: arm need: x86_64", so I suspect this is an ARM compatibility issue.
Workarounds:
1. Install x86 versions of the packages and run them under Rosetta 2:
prefix the install commands with arch -x86_64,
e.g. arch -x86_64 pip install -r requirements.txt,
and finally launch with arch -x86_64 python demo_toolbox.py
or
2. Install a Windows virtual machine
~-~ pd17 works great
|
open
|
2021-08-30T15:43:50Z
|
2022-04-05T10:02:54Z
|
https://github.com/babysor/MockingBird/issues/65
|
[
"documentation"
] |
msly5
| 5
|
tflearn/tflearn
|
tensorflow
| 611
|
loading saved model "NotFoundError"
|
I can save the model by: model.save("test.tfl")
INFO:tensorflow:./test.tfl is not in all_model_checkpoint_paths. Manually adding it.
Files appear under the same folder:
model.tfl.ckpt-500.data-00000-of-00001
model.tfl.ckpt-500.index
model.tfl.ckpt-500.meta
model.tfl.ckpt-860.data-00000-of-00001
model.tfl.ckpt-860.index
model.tfl.ckpt-860.meta
I can load with: model.load("./test.tfl"), but not model.load("test.tfl")
However, if I have pretrained model files in the same folder ("ttt.tfl"), then even if I use model.load("./ttt.tfl"), an error occurs (the files are there):
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tflearn/helpers/trainer.py:378 in restore.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
---------------------------------------------------------------------------
NotFoundError Traceback (most recent call last)
<ipython-input-39-e7123c38a977> in <module>()
----> 1 model.load("./ttt.tfl")
/usr/local/lib/python2.7/dist-packages/tflearn/models/dnn.pyc in load(self, model_file)
225
226 """
--> 227 self.trainer.restore(model_file)
228 self.session = self.trainer.session
229 self.predictor = Evaluator([self.net],
/usr/local/lib/python2.7/dist-packages/tflearn/helpers/trainer.pyc in restore(self, model_file)
377 self.session = tf.Session()
378 self.session.run(tf.initialize_all_variables())
--> 379 self.restorer.restore(self.session, model_file)
380 for o in self.train_ops:
381 o.session = self.session
/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.pyc in restore(self, sess, save_path)
1386 return
1387 sess.run(self.saver_def.restore_op_name,
-> 1388 {self.saver_def.filename_tensor_name: save_path})
1389
1390 @staticmethod
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
764 try:
765 result = self._run(None, fetches, feed_dict, options_ptr,
--> 766 run_metadata_ptr)
767 if run_metadata:
768 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
962 if final_fetches or final_targets:
963 results = self._do_run(handle, final_targets, final_fetches,
--> 964 feed_dict_string, options, run_metadata)
965 else:
966 results = []
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1012 if handle is None:
1013 return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1014 target_list, options, run_metadata)
1015 else:
1016 return self._do_call(_prun_fn, self._session, handle, feed_dict,
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
1032 except KeyError:
1033 pass
-> 1034 raise type(e)(node_def, op, message)
1035
1036 def _extend_graph(self):
NotFoundError: Key FullyConnected_2/b not found in checkpoint
[[Node: save_1/RestoreV2_46 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save_1/Const_0, save_1/RestoreV2_46/tensor_names, save_1/RestoreV2_46/shape_and_slices)]]
|
closed
|
2017-02-18T14:28:59Z
|
2017-02-18T14:37:29Z
|
https://github.com/tflearn/tflearn/issues/611
|
[] |
edgedislocation
| 0
|
albumentations-team/albumentations
|
machine-learning
| 2,394
|
[New feature] Add apply_to_images to CLAHE
|
open
|
2025-03-11T00:59:04Z
|
2025-03-11T00:59:11Z
|
https://github.com/albumentations-team/albumentations/issues/2394
|
[
"enhancement",
"good first issue"
] |
ternaus
| 0
|
|
amidaware/tacticalrmm
|
django
| 1,057
|
ENHANCEMENT: Add option to enable Proxy Protocol on TRMM Nginx Container
|
**Is your feature request related to a problem? Please describe.**
If TRMM is hosted behind a Load Balancer, it does not get the real IP of the client host in the logs.
**Describe the solution you'd like**
The solution to this problem is the use of `Proxy Protocol`, see https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/
- It should not be enabled by default, as this will break existing setups unless it's turned on at both the Load Balancer and ingress levels.
- Instead, it should be configurable by an environment variable e.g. `PROXY_PROTOCOL: true`, this way it is compatible with both K8s & docker-compose
- Setting `proxy protocol` should also enable it for the `ssl` binding (4443 for the TRMM non-root nginx container)
- There also needs to be an option to configure the `set_real_ip_from` directive in order to set it to the IP of the load balancer (or, in the case of K8s, the IP CIDR range of the pod virtual network); it might look something like this: `REAL_IP_FROM: 192.168.0.0/16`
**Describe alternatives you've considered**
Using a custom NGINX container
**Additional context**
Add any other context or screenshots about the feature request here.
|
closed
|
2022-04-08T18:40:48Z
|
2022-06-21T04:22:00Z
|
https://github.com/amidaware/tacticalrmm/issues/1057
|
[
"question"
] |
joeldeteves
| 8
|
aiogram/aiogram
|
asyncio
| 835
|
Bot API 5.7
|
# Bot API 5.7
- Added support for Video Stickers.
- Added the field is_video to the classes Sticker and StickerSet.
- Added the parameter webm_sticker to the methods createNewStickerSet and addStickerToSet.
# Bot API 5.6
- Improved support for Protected Content.
- Added the parameter protect_content to the methods sendMessage, sendPhoto, sendVideo, sendAnimation, sendAudio, sendDocument, sendSticker, sendVideoNote, sendVoice, sendLocation, sendVenue, sendContact, sendPoll, sendDice, sendInvoice, sendGame, sendMediaGroup, copyMessage, forwardMessage to allow sending messages with protected content to any chat.
- Added support for spoiler entities, which will work in Telegram versions released after December 30, 2021. Older clients will display an unsupported message.
- Added new MessageEntity type “spoiler”.
- Added the ability to specify spoiler entities using HTML and MarkdownV2 formatting options.
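As a small usage sketch once this lands (assuming an aiogram release that implements Bot API 5.6, so that `protect_content` is exposed on the send methods; the wrapper function is illustrative only):
```python
from aiogram import Bot


async def send_protected(bot: Bot, chat_id: int) -> None:
    # protect_content prevents forwarding/saving of the sent message (Bot API 5.6+).
    await bot.send_message(chat_id=chat_id, text="Visible but not forwardable",
                           protect_content=True)
```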
|
closed
|
2022-02-16T22:05:00Z
|
2022-02-20T13:04:43Z
|
https://github.com/aiogram/aiogram/issues/835
|
[] |
JrooTJunior
| 0
|
deeppavlov/DeepPavlov
|
nlp
| 1,139
|
Plans to train bigger BERT/RoBERTa/T5 models for the Russian language?
|
Hello!
Do you have any plans for training larger transformer models, something from the latest architectures (Reformer specifically) or BERT?
Or maybe you have plans in the opposite direction: to train TinyBERT with https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT ?
|
closed
|
2020-02-26T22:07:28Z
|
2020-04-30T11:07:19Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1139
|
[] |
avostryakov
| 2
|
custom-components/pyscript
|
jupyter
| 58
|
registered_triggers is not defined in new app
|
I'm using the code below for a new app, and no matter what I try I can't stop it from throwing an error on `registered_triggers`.
```
registered_triggers = []
# start the app
@time_trigger('startup')
def personTrackerStartup():
loadApp('location', makePersonTracker)
def makePersonTracker(config):
global registered_triggers
personID = config['person']
personName = personID.split('.')[1]
@task_unique(f'{personName}_tracker')
@state_trigger(f'{personID} != {personID}.old')
def tracker(value=None):
return
# register to global scope
registered_triggers.append(tracker) # registered_triggers.append(tracker): NameError: global name 'registered_triggers' is not defined
def loadApp(app_name, factory):
if 'apps' not in pyscript.config:
return
if app_name not in pyscript.config['apps']:
return
for app in pyscript.config['apps'][app_name]:
factory(app)
```
I followed the Wiki for the structure but there's a good chance I'm doing something wrong.
|
closed
|
2020-10-27T17:35:20Z
|
2020-10-29T01:12:58Z
|
https://github.com/custom-components/pyscript/issues/58
|
[] |
Nxt3
| 16
|
Textualize/rich
|
python
| 2,461
|
[REQUEST] Improved uninstall for `rich.traceback.install` and `rich.pretty.install`
|
We are very excited to have recently adopted rich on the Kedro framework. In short, our setup is:
```
logging.config.dictConfig(logging_config) # where logging_config uses rich logging handler
rich.traceback.install(show_locals=True, suppress=[click])
rich.pretty.install()
```
Some people have reported various issues (e.g. https://github.com/Textualize/rich/issues/2455, https://github.com/Textualize/rich/issues/2408), and more generally, users have asked whether they can uninstall rich tracebacks. See also https://github.com/Textualize/rich/discussions/1947.
Now there are a few problems:
* the above code is baked into the framework and not accessible to Kedro users. Hence a user cannot easily capture the original exception handler that is returned by `rich.traceback.install` in order to restore it
* when running on IPython, `rich.traceback.install` modifies various things like `ip._showtraceback`. However, [the function only returns `sys.excepthook`](https://github.com/Textualize/rich/blob/master/rich/traceback.py#L134-L136). Hence, even if a user were able to capture the call as `old_excephook = rich.traceback.install()`, it would not provide sufficient information to fully undo the effects of `rich.traceback.install()`
* `rich.pretty.install` has caused fewer problems for users so is less important to try to uninstall, but it does not return anything at all and hence, even if you had access to the code that called `rich.pretty.install`, there's no way to reliably reverse the call
## Suggested solutions
A new `rich.traceback.uninstall` functionality that fully reverses the effect of `rich.traceback.install`. This would not require the user to be able to access the call to `install` in the first place and would also work on IPython. Similarly for `rich.pretty.uninstall` (but less important to us).
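As an illustration of the plain-Python half of this, a minimal wrapper one can write today (a sketch only; it captures `sys.excepthook` around `rich.traceback.install` and does not cover the IPython hooks):
```python
import sys

import rich.traceback


def install_traceback_with_restore(**kwargs):
    """Install rich tracebacks and return a callable that restores the prior hook."""
    previous_hook = sys.excepthook      # whatever was active before rich replaced it
    rich.traceback.install(**kwargs)

    def uninstall() -> None:
        sys.excepthook = previous_hook

    return uninstall


# Usage:
# uninstall = install_traceback_with_restore(show_locals=True)
# ...
# uninstall()
```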
## Current workarounds
```
# Undo rich.traceback.install
import sys
sys.excepthook = sys.__excepthook__

# In IPython
from IPython.core.interactiveshell import InteractiveShell
ip = get_ipython()
ip._showtraceback = InteractiveShell()._showtraceback
ip.showtraceback = InteractiveShell().showtraceback
ip.showsyntaxerror = InteractiveShell().showsyntaxerror

# Undo rich.pretty.install
sys.displayhook = sys.__displayhook__

# In IPython
from IPython.core.formatters import PlainTextFormatter
ip.display_formatter.formatters["text/plain"] = PlainTextFormatter()
```
This is not quite right because it doesn't restore things to how they were before the rich `install`s; it just resets them to the Python/IPython defaults. Hence on platforms such as Databricks, where it's difficult to figure out what the correct in-built Databricks customisations to exception handling etc., the above uninstalls aren't correct because they don't restore the settings to their databricks pre-rich-install state.
|
open
|
2022-08-11T08:56:35Z
|
2023-05-25T15:28:40Z
|
https://github.com/Textualize/rich/issues/2461
|
[
"Needs triage"
] |
antonymilne
| 5
|
pytorch/vision
|
machine-learning
| 8,126
|
Models input/output conventions (shape, preproc/postproc, etc.)
|
Not an issue, just writing this down for reference.
(**TODO** perhaps it'd be useful to clarify whether models support arbitrary H and W)
## Img Classification
**Input**
Shape (N, C, H, W).
Must be scaled from [0, 1] into specific scale
**Output**
Shape (N, num_classes), logits
**preproc and postproc within model** None
## Semantic segmentation
**Input**
Shape (N, C, H, W). The expected H,W as per weight preproc is (520, 520), not enforced.
Must be scaled from [0, 1] into specific scale
**Output** (Both training and eval)
OrderedDict with keys:
- "out", shape (N, num_classes, H, W) logits where H, W are the input image size.
- "aux", only for deeplabv3 and fcn and if aux_loss is True. Same shape as "out".
**preproc within model** None
**postproc within model** The masks are predicted at lower resolutions so they get upsampled at the end of `forward()`. Note that the `weights.transforms()` may resize the input, which is the shape that the model will then output, and may be different from the original image size.
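A small sketch of the segmentation convention above (assuming the torchvision weights-enum API; exact enum names can differ slightly between versions):
```python
import torch
from torchvision.models.segmentation import fcn_resnet50, FCN_ResNet50_Weights

weights = FCN_ResNet50_Weights.DEFAULT
model = fcn_resnet50(weights=weights).eval()
preprocess = weights.transforms()          # resizes and normalizes

img = torch.rand(3, 480, 640)              # (C, H, W) in [0, 1]
batch = preprocess(img).unsqueeze(0)       # (1, C, H', W') after the weight transforms

with torch.no_grad():
    out = model(batch)["out"]              # (1, num_classes, H', W') logits
```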
## Video classification
**Input**
Shape (N, C, T, H, W). T = number of frames in each clip / sample. C is num_channels as usual.
Must be scaled from [0, 1] into custom scale
**Output**
(N, num_classes) logits.
## Optical Flow
**Input**
2 imgs of shape (N, C, H, W). H, W must be multiples of 8.
Scale: [-1, 1]
Note that the weight transforms don't resize so they don't ensure the input is
divisible by 8.
**Output**
list of tensors of shape (N, 2, H, W). 2 channels for horiz and vertical directions. len(list) = num_flow_updates.
**postproc within model** predicted flows are up-sampled to the original image size. This is just a call to `interpolate()` for raft_small but for raft_large, the upsampling is *learned* during training so it needs to be done in the model.
## Object detection
This includes object detection (Mask-RCNN) and ignores keypoint detection which is fairly similar.
**Input**
list of images of shape (C, H, W). Each image can have a different shape (it will be pre-processed). len(list) == N.
scale: [0, 1]
During training, must also pass target as a list (len=N) of dicts where each dict has keys:
- "boxes": shape (num_instances, 4) in XYXY format
- "labels": shape (num_instances,)
- "masks": shape (num_instances, H, W) for instance seg.
**Output**
- when training: dict[str, Tensor] of losses, e.g. for fasterrcnn:
- "loss_classifier"
- "loss_box_reg"
- "loss_objectness"
- "loss_rpn_box_reg"
- "loss_mask" for instance seg
- when eval: list (len=N) of dicts with keys:
- "boxes": shape (num_det, 4) in XYXY format
- "labels": shape (num_det,)
- "scores": shape (num_det,)
- "masks": shape (num_det, 1, H, W) for instance seg. Value aren't logits, they're probabilities of each pixel being part of the instance.
Since forever, users have requested the output to be unified https://github.com/pytorch/vision/issues/1574 across training and eval.
These are the only models where the loss is computed within the model, and not outside of it.
**preproc within model** (training and eval)
- scaling from [0, 1] to model-dependent scale
- resizing of the image (for kp-rcnn, random resizing for augmentation) based on model-dependent min_size and max_size. Even after resizing, images may still differ in size. They are copied into an NCHW batch where HW depends on the max dims of each image *and* the `size_divisible` param. Masks and boxes are resized accordingly. ImageList contains that batch along with the sizes of the images *after* they were resized: this is what the model will use internally, and those sizes will be needed as well in postproc.
**postproc within model**
- during training, nothing is done. There's no need, these models only return the losses and they can be computed on the pre-processed inputs.
- during eval: boxes and masks are resized back to the original image size.
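A small sketch of the detection convention above, showing the train/eval output asymmetry (assuming torchvision's `fasterrcnn_resnet50_fpn` and the string weights shortcut available in recent versions):
```python
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

images = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]   # list of (C, H, W) in [0, 1]
targets = [
    {"boxes": torch.tensor([[10.0, 20.0, 100.0, 200.0]]), "labels": torch.tensor([1])},
    {"boxes": torch.tensor([[30.0, 40.0, 150.0, 250.0]]), "labels": torch.tensor([2])},
]

model.train()
losses = model(images, targets)    # dict[str, Tensor] of losses

model.eval()
with torch.no_grad():
    detections = model(images)     # list of dicts: "boxes", "labels", "scores"
```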
|
open
|
2023-11-20T14:24:08Z
|
2023-11-20T14:24:49Z
|
https://github.com/pytorch/vision/issues/8126
|
[] |
NicolasHug
| 0
|
seleniumbase/SeleniumBase
|
pytest
| 2,883
|
Improve multithreading with UC Mode 4.28.x
|
# Improve multithreading with UC Mode 4.28.x
In case you missed https://github.com/seleniumbase/SeleniumBase/issues/2865, `4.28.0` added a new UC Mode method: `uc_gui_handle_cf()`, which uses `pyautogui` to click Cloudflare checkboxes with the keyboard while the driver is disconnected from Chrome. For those of you who might not be familiar with the basics of that, `pyautogui` keyboard actions only reach the active window on top (in the case of multiple windows). In order for the `pyautogui` action to be successful, the window with the CAPTCHA must remain on top for the duration of the `pyautogui` actions. In the case of `uc_gui_handle_cf()`, that duration is generally less than 2 seconds per call, even if you have a lot of windows open and being controlled at the same time. The "call" includes: Making the current window the active window on top, finding the iframe, switching into the iframe, making the checkbox the active element, and then clicking the checkbox by pressing the spacebar with `pyautogui`. To prevent that "call" from being disrupted, we need to use thread-locking to prevent other actions from making another window become the active one (for the entire duration of the "call").
Here are some actions that would make another window the active one:
* Launching a new browser.
* Calling `driver.switch_to.window()`.
* Human actions while scripts are running.
Thread-locking can be placed around browser launches that occur via SeleniumBase. It can also be placed around SeleniumBase methods that call `driver.switch_to.window()` indirectly. There isn't much that can be done about human actions while the scripts are running, or if people are calling `driver.switch_to.window()` directly from their scripts.
With the extra thread-locking added, we should be able to see a noticeable improvement in the success rate of `uc_gui_handle_cf()` calls when multiple threads are being used to spin up multiple browsers at the same time. As a side-effect, there may be some slowdowns when multiple threads are trying to change the active window at the same time, because only one thread will be able to perform such an action at one time. This special thread-locking will only take place during UC Mode. For regular mode, there won't be any blockers from scripts performing actions.
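As an illustration of the thread-locking idea (a simplified sketch, not SeleniumBase's actual code; `send_keyboard_click` stands in for the pyautogui keystroke):
```python
import threading

window_lock = threading.Lock()   # shared by all threads driving browsers

def click_captcha_checkbox(driver, send_keyboard_click):
    # Hold the lock for the whole "call": activate the window, then send keystrokes,
    # so no other thread can switch the active window in between.
    with window_lock:
        driver.switch_to.window(driver.current_window_handle)  # make this window active
        send_keyboard_click()   # e.g. a pyautogui spacebar press aimed at the top window
```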
|
closed
|
2024-06-28T20:03:57Z
|
2024-07-01T01:41:13Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2883
|
[
"enhancement",
"UC Mode / CDP Mode"
] |
mdmintz
| 4
|
pyro-ppl/numpyro
|
numpy
| 1,646
|
Error when importing numpyro-0.13.1
|
UPDATE: Bumping JAX to the latest version seems to fix the problem
jax version = 0.4.14
Reproducible example in google colab: https://colab.research.google.com/drive/1R444hZjVV0KDaaksTE6Gf72DaH8rUqZt?usp=sharing
```
[/usr/local/lib/python3.10/dist-packages/numpyro/__init__.py](https://localhost:8080/#) in <module>
4 import logging
5
----> 6 from numpyro import compat, diagnostics, distributions, handlers, infer, ops, optim
7 from numpyro.distributions.distribution import enable_validation, validation_enabled
8 from numpyro.infer.inspect import render_model
[/usr/local/lib/python3.10/dist-packages/numpyro/infer/__init__.py](https://localhost:8080/#) in <module>
3
4 from numpyro.infer.barker import BarkerMH
----> 5 from numpyro.infer.elbo import (
6 ELBO,
7 RenyiELBO,
[/usr/local/lib/python3.10/dist-packages/numpyro/infer/elbo.py](https://localhost:8080/#) in <module>
23 log_density,
24 )
---> 25 from numpyro.ops.provenance import eval_provenance
26 from numpyro.util import _validate_model, check_model_guide_match, find_stack_level
27
[/usr/local/lib/python3.10/dist-packages/numpyro/ops/provenance.py](https://localhost:8080/#) in <module>
6 import jax.core as core
7 from jax.experimental.pjit import pjit_p
----> 8 import jax.extend.linear_util as lu
9 from jax.interpreters.partial_eval import trace_to_jaxpr_dynamic
10 from jax.interpreters.pxla import xla_pmap_p
ModuleNotFoundError: No module named 'jax.extend.linear_util'
```
|
closed
|
2023-09-22T20:55:32Z
|
2023-09-22T22:46:23Z
|
https://github.com/pyro-ppl/numpyro/issues/1646
|
[
"bug"
] |
ziatdinovmax
| 1
|
jacobgil/pytorch-grad-cam
|
computer-vision
| 462
|
If the target layer of the model is encapsulated and there are multiple outputs
|
How should I set the value of the target layer?
This is my target model: https://github.com/HRNet/HRNet-Semantic-Segmentation
|
open
|
2023-10-23T03:15:06Z
|
2023-10-23T03:16:56Z
|
https://github.com/jacobgil/pytorch-grad-cam/issues/462
|
[] |
douciyy
| 0
|
mwaskom/seaborn
|
pandas
| 3,253
|
How to move the legend of sns.jointplot() with kind='kde'?
|
Hi,
I am trying to move the legend of a jointplot, but none of the methods work.
I tried
```
import seaborn as sns
penguins = sns.load_dataset("penguins")
sns.jointplot(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue="species")
```

```
sns.jointplot(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue="species", legend_out=True)
>>> AttributeError: 'PathCollection' object has no property 'legend_out'
g = sns.jointplot(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue="species")
g._legend.set_bbox_to_anchor((1.1, 1.1))
>>> AttributeError: 'JointGrid' object has no attribute '_legend'
g = sns.jointplot(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue="species")
sns.move_legend(g, "upper left", bbox_to_anchor=(.55, .45), title='Species')
>>> TypeError: `obj` must be a seaborn Grid or matplotlib Axes or Figure instance.
```
This works for 0.11.2, but not for 0.12.2:
```
g = sns.jointplot(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue="species")
g.ax_joint.legend(bbox_to_anchor=(1.5, 1.1))
```
In seaborn 0.12.2 the legend is moved, but also cleared of any labels.
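One possible workaround (untested here; it assumes the legend lives on the joint axes, which is where `jointplot` places it) is to pass that axes to `move_legend` instead of the `JointGrid`:
```python
import seaborn as sns

penguins = sns.load_dataset("penguins")
g = sns.jointplot(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue="species")
# Target the joint axes directly rather than the JointGrid object.
sns.move_legend(g.ax_joint, "upper left", bbox_to_anchor=(1.25, 1.0), title="Species")
```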
---
Also `penguins = sns.load_dataset("penguins")` does not work with 0.12.2. Just ran into this while generating this issue.
```
Cell In[2], line 3
1 import seaborn as sns
----> 3 penguins = sns.load_dataset("penguins")
5 sns.jointplot(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue="species")
File /bulk/LSARP/envs/conda/pycaret/lib/python3.8/site-packages/seaborn/utils.py:584, in load_dataset(name, cache, data_home, **kws)
581 url = f"[https://raw.githubusercontent.com/mwaskom/seaborn-data/master/{name}.csv](https://raw.githubusercontent.com/mwaskom/seaborn-data/master/%7Bname%7D.csv)"
583 if cache:
--> 584 cache_path = os.path.join(get_data_home(data_home), os.path.basename(url))
585 if not os.path.exists(cache_path):
586 if name not in get_dataset_names():
File /bulk/LSARP/envs/conda/pycaret/lib/python3.8/site-packages/seaborn/utils.py:534, in get_data_home(data_home)
532 data_home = os.path.expanduser(data_home)
533 if not os.path.exists(data_home):
--> 534 os.makedirs(data_home)
535 return data_home
File /bulk/LSARP/envs/conda/pycaret/lib/python3.8/os.py:223, in makedirs(name, mode, exist_ok)
221 return
222 try:
--> 223 mkdir(name, mode)
224 except OSError:
225 # Cannot rely on checking for EEXIST, since the operating system
226 # could give priority to other errors like EACCES or EROFS
227 if not exist_ok or not path.isdir(name):
>>>OSError: [Errno 38] Function not implemented: '/home/jovyan/.cache/seaborn'
```
Note: This is probably because I have to use Python 3.8.16.
|
closed
|
2023-02-09T17:51:06Z
|
2023-02-09T23:23:41Z
|
https://github.com/mwaskom/seaborn/issues/3253
|
[] |
sorenwacker
| 4
|
sinaptik-ai/pandas-ai
|
data-visualization
| 1,407
|
Additional guidance on configuring the pandasai.json file in the LLM setup process.
|
Path: /llms
|
closed
|
2024-10-24T08:23:23Z
|
2024-12-16T11:21:25Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1407
|
[
"documentation"
] |
Muhammad-Adam1
| 10
|
gradio-app/gradio
|
machine-learning
| 10,279
|
[Image] - Image is not rendered completely
|
### Describe the bug
If I load a 2 Megapixel image into the component it is not rendered correctly, only halfway and I get an error in the log. I need to keep the component area at 300x300.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as app:
gallery = gr.Image(label="Image", interactive=True, width=300, height=300, show_label=True)
app.launch(inbrowser=True)
```
### Screenshot

### Logs
```shell
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 398, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 70, in __call__
return await self.app(scope, receive, send)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\fastapi\applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\applications.py", line 113, in __call__
await self.middleware_stack(scope, receive, send)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\middleware\errors.py", line 187, in __call__
raise exc
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\middleware\errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\gradio\route_utils.py", line 789, in __call__
await self.app(scope, receive, send)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\middleware\exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
raise exc
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\routing.py", line 735, in app
await route.handle(scope, receive, send)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\routing.py", line 288, in handle
await self.app(scope, receive, send)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
raise exc
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\routing.py", line 74, in app
await response(scope, receive, send)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\responses.py", line 348, in __call__
await self._handle_simple(send, send_header_only)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\responses.py", line 377, in _handle_simple
await send({"type": "http.response.body", "body": chunk, "more_body": more_body})
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\_exception_handler.py", line 39, in sender
await send(message)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\_exception_handler.py", line 39, in sender
await send(message)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in _send
await send(message)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 495, in send
output = self.conn.send(event=h11.Data(data=data))
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\h11\_connection.py", line 512, in send
data_list = self.send_with_data_passthrough(event)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\h11\_connection.py", line 545, in send_with_data_passthrough
writer(event, data_list.append)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\h11\_writers.py", line 65, in __call__
self.send_data(event.data, write)
File "f:\Projetos\DiffusersWebUI\venv\lib\site-packages\h11\_writers.py", line 91, in send_data
raise LocalProtocolError("Too much data for declared Content-Length")
h11._util.LocalProtocolError: Too much data for declared Content-Length
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.9.1
gradio_client version: 1.5.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.4.0
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.5.2 is not installed.
httpx: 0.27.0
huggingface-hub: 0.25.2
jinja2: 3.1.3
markupsafe: 2.1.5
numpy: 1.26.3
orjson: 3.10.6
packaging: 24.1
pandas: 2.2.2
pillow: 11.0.0
pydantic: 2.8.2
pydub: 0.25.1
python-multipart: 0.0.19
pyyaml: 6.0.1
ruff: 0.5.6
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit: 0.12.0
typer: 0.12.3
typing-extensions: 4.12.2
urllib3: 2.2.2
uvicorn: 0.30.5
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.2.0
httpx: 0.27.0
huggingface-hub: 0.25.2
packaging: 24.1
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it
|
closed
|
2025-01-02T20:30:07Z
|
2025-01-31T03:25:42Z
|
https://github.com/gradio-app/gradio/issues/10279
|
[
"bug"
] |
elismasilva
| 3
|
LAION-AI/Open-Assistant
|
machine-learning
| 3,714
|
When I click on "start new message", the click doesn't register
|
closed
|
2023-10-19T10:38:39Z
|
2023-11-28T07:20:17Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3714
|
[] |
ibgthb
| 1
|
|
ets-labs/python-dependency-injector
|
flask
| 410
|
Config passed in as dict
|
When I use the config object in the container definitions I can use it like an object, but when I pass it through ```Provide``` I get a dict. How can I also get an object here?
|
closed
|
2021-02-27T01:53:12Z
|
2021-02-27T12:37:28Z
|
https://github.com/ets-labs/python-dependency-injector/issues/410
|
[
"question"
] |
wackazong
| 3
|
django-import-export/django-import-export
|
django
| 1,098
|
Need row parameter for Resource.skip_row method
|
`skip_unchanged` still doesn't work for Many-to-Many (M2M) fields, as described in #385. As commented by @irsalmashhor (https://github.com/django-import-export/django-import-export/issues/385#issuecomment-385288150), I think we need to pass the `row` value to `skip_row` so it can inspect M2M fields on the instance. Any ideas? A hypothetical sketch follows.
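For illustration, an override with the requested extra argument might look like this (a hypothetical sketch; the `row` parameter is exactly what this issue asks to be added, not something the current `skip_row` receives):
```python
from import_export import resources


class BookResource(resources.ModelResource):
    # Hypothetical signature: `row` is the extra argument requested in this issue.
    def skip_row(self, instance, original, row=None):
        if row is not None:
            # With the raw row available, M2M columns could be compared against
            # the values already stored on `original` before deciding to skip.
            pass
        return super().skip_row(instance, original)
```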
|
closed
|
2020-03-17T03:05:34Z
|
2021-05-05T20:36:55Z
|
https://github.com/django-import-export/django-import-export/issues/1098
|
[] |
okapies
| 1
|
ndleah/python-mini-project
|
data-visualization
| 279
|
Enhance the Dice Rolling Simulator
|
# Description
In this issue I want to modify the game's functionality by introducing a "try again" feature and restructuring the code into a class. The objective is to enhance the user experience by providing the option to reroll the dice without restarting the entire program, and also to add docstrings and update the README file to make it easier for users to see how to run the game (a sketch follows the image below).

<!-- Please include a summary of the issue.-->
## Type of issue
- [x] Feature (New Script)
- [ ] Bug
- [x] Documentation
## Checklist:
- [x] I have read the project guidelines.
- [x] I have checked previous issues to avoid duplicates.
- [x] This issue will be meaningful for the project.
<!-- Uncomment this in case you have a issue related to a bug in existing code.-->
<!--
- [ ] I have added screenshots of the bug
- [ ] I have added steps to reproduce the bug
- [ ] I have proposed a possible solution for the bug
-->
|
open
|
2024-06-10T09:56:08Z
|
2024-06-10T11:16:14Z
|
https://github.com/ndleah/python-mini-project/issues/279
|
[] |
Gabriela20103967
| 0
|
ultralytics/yolov5
|
machine-learning
| 12,586
|
ConfusionMatrix incorrect?
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Validation
### Bug
I believe there is a problem in the ConfusionMatrix() method. It only counts the false positives if there are true positives.
```python
n = matches.shape[0] > 0
m0, m1, _ = matches.transpose().astype(int)
for i, gc in enumerate(gt_classes):
    j = m0 == i
    if n and sum(j) == 1:
        self.matrix[detection_classes[m1[j]], gc] += 1  # correct
    else:
        self.matrix[self.nc, gc] += 1  # true background

if n:
    for i, dc in enumerate(detection_classes):
        if not any(m1 == i):
            self.matrix[dc, self.nc] += 1  # predicted background
```
I believe it should be:
```python
n = matches.shape[0] > 0
m0, m1, _ = matches.transpose().astype(int)
for i, gc in enumerate(gt_classes):
    j = m0 == i
    if n and sum(j) == 1:
        self.matrix[detection_classes[m1[j]], gc] += 1  # correct
    else:
        self.matrix[self.nc, gc] += 1  # true background

# NO IF STATEMENT HERE
for i, dc in enumerate(detection_classes):
    if not any(m1 == i):
        self.matrix[dc, self.nc] += 1  # predicted background
```
### Environment
_No response_
### Minimal Reproducible Example
```python
n = matches.shape[0] > 0
m0, m1, _ = matches.transpose().astype(int)
for i, gc in enumerate(gt_classes):
    j = m0 == i
    if n and sum(j) == 1:
        self.matrix[detection_classes[m1[j]], gc] += 1  # correct
    else:
        self.matrix[self.nc, gc] += 1  # true background

if n:
    for i, dc in enumerate(detection_classes):
        if not any(m1 == i):
            self.matrix[dc, self.nc] += 1  # predicted background
```
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR!
|
closed
|
2024-01-05T18:58:52Z
|
2024-10-20T19:36:18Z
|
https://github.com/ultralytics/yolov5/issues/12586
|
[
"bug",
"Stale"
] |
GilSeamas
| 3
|
microsoft/unilm
|
nlp
| 804
|
How to train LayoutLMv3 at the character level
|
**Describe**
Model I am using (LayoutLMv3):
The model trains entities per bbox, but my entity is only part of the text in a bbox, and there can be multiple entities in one bbox. How should I deal with this situation?
|
closed
|
2022-07-26T12:33:51Z
|
2022-08-25T06:58:45Z
|
https://github.com/microsoft/unilm/issues/804
|
[] |
413542484
| 1
|
horovod/horovod
|
deep-learning
| 2,941
|
How to install Gloo on Ubuntu 18.04?
|
I was trying to train my model on different hosts using this command:
`HOROVOD_WITH_TENSORFLOW=1 horovodrun --gloo -np 10 -H workstation-1:4,localhost:4 python train.py
`
However, I got this error:
`ValueError: Gloo support has not been built. If this is not expected, ensure CMake is installed and reinstall Horovod with HOROVOD_WITH_GLOO=1 to debug the build error.`
I searched Google for how to install Gloo but found nothing. Could someone guide me on how to install it on Ubuntu 18.04?
|
closed
|
2021-05-26T08:13:08Z
|
2021-08-03T13:41:32Z
|
https://github.com/horovod/horovod/issues/2941
|
[
"question",
"wontfix"
] |
harrytrinh96
| 5
|
google-deepmind/sonnet
|
tensorflow
| 245
|
This portion of the attention code looks incorrect
|
```python
attention_mlp = basic.BatchApply(
    mlp.MLP([self._mem_size] * self._attention_mlp_layers))

for _ in range(self._num_blocks):
    attended_memory = self._multihead_attention(memory)
```
Shouldn't it be this?
```python
attended_memory = memory
for _ in range(self._num_blocks):
    attended_memory = self._multihead_attention(attended_memory)
```
I know `memory` isn't changed inside that function either, so isn't the loop as written expected to be redundant?
|
open
|
2022-06-14T07:38:51Z
|
2022-06-14T07:40:17Z
|
https://github.com/google-deepmind/sonnet/issues/245
|
[] |
ava6969
| 0
|
skforecast/skforecast
|
scikit-learn
| 566
|
Using gaps in the forecast
|
Hi everyone,
I want to figure out how to build a forecast model that is trained with a gap, and I haven't found this in skforecast's otherwise very well prepared material. For example, the model uses the sales of 5 days to predict the sales of the seventh day, i.e. there is a gap between "y" and the model's lags, which would start at lag 2. The only place where I have found something similar is backtesting. Can you tell me where to find this, or how to do it?
Many thanks,
Gabriel.
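For illustration, one way to encode the gap is to pass an explicit list of lags that starts at lag 2 (a sketch; `ForecasterAutoreg` accepts a list of lags, though the import path varies across skforecast versions):
```python
from sklearn.ensemble import RandomForestRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg  # path may differ by version

# Use sales at t-6 ... t-2 to predict t, i.e. skip lag 1 to create the gap.
forecaster = ForecasterAutoreg(
    regressor=RandomForestRegressor(random_state=123),
    lags=[2, 3, 4, 5, 6],
)
# forecaster.fit(y=sales_series)
# forecaster.predict(steps=1)
```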
|
closed
|
2023-10-03T20:08:15Z
|
2023-11-02T08:18:06Z
|
https://github.com/skforecast/skforecast/issues/566
|
[
"question"
] |
GabrielCornejo
| 1
|
Gozargah/Marzban
|
api
| 1,384
|
ClashMeta Bugs: Mux & HTTPUpgrade
|
Problem 1: Mux is disabled in the mux template for ClashMeta (but enabled in the host settings), yet mux ends up enabled in the ClashMeta config.
The Xray server does not support any mux except mux.cool; I don't know how ClashMeta can still connect to my server using h2mux! Maybe it falls back to disabling mux when the server doesn't support it.
Problem 2: The HTTPUpgrade config does not work with ClashMeta; something must be wrong.
I have tested with and without early data, and neither worked.
|
closed
|
2024-10-18T23:07:36Z
|
2024-11-04T05:27:57Z
|
https://github.com/Gozargah/Marzban/issues/1384
|
[
"Bug",
"Core"
] |
fodhelper
| 3
|
plotly/dash
|
flask
| 3,007
|
Pattern matching callbacks do not warn if no matches exist
|
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.18.1 A Python framework ...
dash-bootstrap-components 1.6.0 Bootstrap themed co...
dash-core-components 2.0.0 Core component suit...
dash-html-components 2.0.0 Vanilla HTML compon...
dash-table 5.0.0 Dash table
```
**Describe the bug**
When registering a pattern-matching callback, no warning is issued if the pattern does not match the ID of any of the DOM elements.
For example if we create a button with an ID like this:
```python
id={
"type": "delete-list-item",
"deletion-target": "some_path",
"extras": "..."
}
```
and then define a callback that only defines 2/3 of the keys present in the `id` dict:
```python
@app.callback(
Input(
{"type": "delete-list-item", "deletion-target": "some_path", }, "n_clicks"
),
)
def delete_list_item(n_clicks):
print(n_clicks)
```
The callback does not get attached to any element but does show up on the dev tools page.
**Expected behavior**
Under the default conditions (`app.config.suppress_callback_exceptions = False`), a warning should be emitted when no matching `id`s are found.
**Screenshots**
Not needed
|
open
|
2024-09-19T02:06:23Z
|
2024-09-23T14:31:26Z
|
https://github.com/plotly/dash/issues/3007
|
[
"bug",
"P3"
] |
farhanhubble
| 0
|
microsoft/nni
|
pytorch
| 4,927
|
RuntimeError: Has not supported replacing the module: `GroupNorm`
|
**Describe the issue**:
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
|
closed
|
2022-06-12T12:44:15Z
|
2022-09-16T09:19:36Z
|
https://github.com/microsoft/nni/issues/4927
|
[
"user raised",
"support"
] |
qhy991
| 2
|
lexiforest/curl_cffi
|
web-scraping
| 27
|
Can't install with poetry and pip
|
The latest version (0.4.0) can't be installed.
My environment:
OS: MacOS 13.2.1 (22D68)
Python: 3.9.12
Pip: 22.3.1
poetry: 1.3.1
Here is the error log:
```
pip install curl_cffi ─╯
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting curl_cffi
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/33/9f/ef07f1c1348a7e2dd76be39fb534095014684a98cc64cb696c74cfcf5344/curl_cffi-0.4.0.tar.gz (75 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 75.2/75.2 kB 523.8 kB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: cffi>=1.12.0 in ./.venv/lib/python3.9/site-packages (from curl_cffi) (1.15.1)
Requirement already satisfied: pycparser in ./.venv/lib/python3.9/site-packages (from cffi>=1.12.0->curl_cffi) (2.21)
Building wheels for collected packages: curl_cffi
Building wheel for curl_cffi (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for curl_cffi (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [114 lines of output]
/private/var/folders/67/bsl5phh57_vg4dl70j75gtjr0000gn/T/pip-build-env-umak2o_7/overlay/lib/python3.9/site-packages/setuptools/config/pyprojecttoml.py:108: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*.
warnings.warn(msg, _BetaConfiguration)
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-13.0-arm64-cpython-39
creating build/lib.macosx-13.0-arm64-cpython-39/curl_cffi
copying curl_cffi/build.py -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi
copying curl_cffi/__init__.py -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi
copying curl_cffi/_const.py -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi
creating build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/requests
copying curl_cffi/requests/cookies.py -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/requests
copying curl_cffi/requests/session.py -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/requests
copying curl_cffi/requests/__init__.py -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/requests
copying curl_cffi/requests/errors.py -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/requests
copying curl_cffi/requests/headers.py -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/requests
running egg_info
writing curl_cffi.egg-info/PKG-INFO
writing dependency_links to curl_cffi.egg-info/dependency_links.txt
writing requirements to curl_cffi.egg-info/requires.txt
writing top-level names to curl_cffi.egg-info/top_level.txt
reading manifest file 'curl_cffi.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'curl_cffi/cacert.pem'
warning: no files found matching 'curl_cffi/*.dll'
adding license file 'LICENSE'
writing manifest file 'curl_cffi.egg-info/SOURCES.txt'
/private/var/folders/67/bsl5phh57_vg4dl70j75gtjr0000gn/T/pip-build-env-umak2o_7/overlay/lib/python3.9/site-packages/setuptools/command/build_py.py:202: SetuptoolsDeprecationWarning: Installing 'curl_cffi.include' as data is deprecated, please list it in `packages`.
!!
############################
# Package would be ignored #
############################
Python recognizes 'curl_cffi.include' as an importable package,
but it is not listed in the `packages` configuration of setuptools.
'curl_cffi.include' has been automatically added to the distribution only
because it may contain data files, but this behavior is likely to change
in future versions of setuptools (and therefore is considered deprecated).
Please make sure that 'curl_cffi.include' is included as a package by using
the `packages` configuration field or the proper discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" and "data files" on setuptools
documentation page.
!!
check.warn(importable)
/private/var/folders/67/bsl5phh57_vg4dl70j75gtjr0000gn/T/pip-build-env-umak2o_7/overlay/lib/python3.9/site-packages/setuptools/command/build_py.py:202: SetuptoolsDeprecationWarning: Installing 'curl_cffi.include.curl' as data is deprecated, please list it in `packages`.
!!
############################
# Package would be ignored #
############################
Python recognizes 'curl_cffi.include.curl' as an importable package,
but it is not listed in the `packages` configuration of setuptools.
'curl_cffi.include.curl' has been automatically added to the distribution only
because it may contain data files, but this behavior is likely to change
in future versions of setuptools (and therefore is considered deprecated).
Please make sure that 'curl_cffi.include.curl' is included as a package by using
the `packages` configuration field or the proper discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" and "data files" on setuptools
documentation page.
!!
check.warn(importable)
copying curl_cffi/cdef.c -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi
copying curl_cffi/shim.c -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi
creating build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include
copying curl_cffi/include/shim.h -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include
creating build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
copying curl_cffi/include/curl/Makefile.am -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
copying curl_cffi/include/curl/curl.h -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
copying curl_cffi/include/curl/curlver.h -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
copying curl_cffi/include/curl/easy.h -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
copying curl_cffi/include/curl/header.h -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
copying curl_cffi/include/curl/mprintf.h -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
copying curl_cffi/include/curl/multi.h -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
copying curl_cffi/include/curl/options.h -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
copying curl_cffi/include/curl/stdcheaders.h -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
copying curl_cffi/include/curl/system.h -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
copying curl_cffi/include/curl/typecheck-gcc.h -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
copying curl_cffi/include/curl/urlapi.h -> build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/include/curl
running build_ext
generating cffi module 'build/temp.macosx-13.0-arm64-cpython-39/curl_cffi._wrapper.c'
creating build/temp.macosx-13.0-arm64-cpython-39
building 'curl_cffi._wrapper' extension
creating build/temp.macosx-13.0-arm64-cpython-39/build
creating build/temp.macosx-13.0-arm64-cpython-39/build/temp.macosx-13.0-arm64-cpython-39
creating build/temp.macosx-13.0-arm64-cpython-39/curl_cffi
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -Icurl_cffi/include -I/Users/maoxiandaozhenhaowan/Desktop/Code/google-trends-service/.venv/include -I/Users/maoxiandaozhenhaowan/.pyenv/versions/3.9.12/include/python3.9 -c build/temp.macosx-13.0-arm64-cpython-39/curl_cffi._wrapper.c -o build/temp.macosx-13.0-arm64-cpython-39/build/temp.macosx-13.0-arm64-cpython-39/curl_cffi._wrapper.o
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -Icurl_cffi/include -I/Users/maoxiandaozhenhaowan/Desktop/Code/google-trends-service/.venv/include -I/Users/maoxiandaozhenhaowan/.pyenv/versions/3.9.12/include/python3.9 -c curl_cffi/shim.c -o build/temp.macosx-13.0-arm64-cpython-39/curl_cffi/shim.o
curl_cffi/shim.c:9:16: warning: unused variable 'opt_value' [-Wunused-variable]
CURLoption opt_value = (CURLoption) option;
^
1 warning generated.
clang -bundle -undefined dynamic_lookup -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/maoxiandaozhenhaowan/.pyenv/versions/3.9.12/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib -L/opt/homebrew/opt/readline/lib -L/opt/homebrew/opt/readline/lib -L/Users/maoxiandaozhenhaowan/.pyenv/versions/3.9.12/lib -L/opt/homebrew/lib -Wl,-rpath,/opt/homebrew/lib -L/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib build/temp.macosx-13.0-arm64-cpython-39/build/temp.macosx-13.0-arm64-cpython-39/curl_cffi._wrapper.o build/temp.macosx-13.0-arm64-cpython-39/curl_cffi/shim.o -L/usr/local/lib -lcurl-impersonate-chrome -o build/lib.macosx-13.0-arm64-cpython-39/curl_cffi/_wrapper.abi3.so
ld: library not found for -lcurl-impersonate-chrome
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for curl_cffi
Failed to build curl_cffi
ERROR: Could not build wheels for curl_cffi, which is required to install pyproject.toml-based projects
[notice] A new release of pip available: 22.3.1 -> 23.0.1
[notice] To update, run: pip install --upgrade pip
```
|
closed
|
2023-03-13T04:22:04Z
|
2023-03-13T05:24:32Z
|
https://github.com/lexiforest/curl_cffi/issues/27
|
[] |
MagicalBomb
| 1
|
twopirllc/pandas-ta
|
pandas
| 207
|
Unable to Use in GCP
|
**Which version are you running? The latest version is on GitHub. Pip is for major releases.**
```python
import pandas_ta as ta
pandas-ta>=0.2.23b
```
**Upgrade.**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
**Describe the bug**
A clear and concise description of what the bug is.
Fails on Import
**To Reproduce**
import pandas_ta as ta
ImportError: cannot import name 'version' from 'pandas_ta' (/env/local/lib/python3.7/site-packages/pandas_ta/__init__.py)
**Expected behavior**
The Module to Load
**Screenshots**
<img width="738" alt="Screen Shot 2021-02-01 at 10 00 40 PM" src="https://user-images.githubusercontent.com/50243740/106558818-45672300-64d9-11eb-87b2-61387c69e7b0.png">
<img width="1144" alt="Screen Shot 2021-02-01 at 10 00 24 PM" src="https://user-images.githubusercontent.com/50243740/106558820-45ffb980-64d9-11eb-8f9b-fa585fd70f6c.png">
**Additional context**
Add any other context about the problem here.
Thanks for using Pandas TA!
|
closed
|
2021-02-02T06:03:32Z
|
2021-05-18T03:03:48Z
|
https://github.com/twopirllc/pandas-ta/issues/207
|
[
"help wanted",
"info"
] |
jboshers1
| 6
|
voxel51/fiftyone
|
data-science
| 5,520
|
[BUG] Wrong Export of the KITTI dataset to YOLOv5 format
|
### Describe the problem
FiftyOne does not export a compliant version of the [KITTI dataset](https://docs.voxel51.com/dataset_zoo/datasets.html#dataset-zoo-kitti) in [yolov5 format](https://docs.voxel51.com/user_guide/export_datasets.html#yolov5dataset-export) when the train and test splits are both exported. I based my code on the [ultralytics fiftyone docs](https://docs.voxel51.com/integrations/ultralytics.html). The exported dataset should have the following format according to [ultralytics yolo](https://docs.ultralytics.com/datasets/#oriented-bounding-boxes-obb):
```
dataset/
├── dataset.yaml
├── train/
│   ├── images/
│   └── labels/
└── val/
    ├── images/
    └── labels/
```
However, the export produces the following structure, without val labels:
```
dataset/
├── dataset.yaml
├── images/
│   ├── train/
│   └── val/
└── labels/
    └── train/
```
The test dataset from KITTI has to be exported as val; this is a requirement of the format. The test labels also won't be exported if I use test as the export name. As a result of the wrong folder structure, no labels can be found.
### Code to reproduce issue
```Python
import fiftyone as fo
import fiftyone.zoo as foz
classes = ["Pedestrian", "Truck", "Car", "Cyclist", "DontCare", "Misc", "Van", "Tram", "Person_sitting"]
fo.config.database_uri = "mongodb://localhost:27017"
train_dataset = foz.load_zoo_dataset(
"kitti",
split="train",
)
train_dataset.export(
export_dir="kitti_yolo",
dataset_type=fo.types.YOLOv5Dataset,
label_field="ground_truth",
split="train",
classes=classes,
)
validation_dataset = foz.load_zoo_dataset(
"kitti",
split="test",
)
validation_dataset.export(
export_dir="kitti_yolo",
dataset_type=fo.types.YOLOv5Dataset,
label_field="ground_truth",
split="val",
classes=classes,
)
```
### System information
- **OS Platform and Distribution**: NixOS 24.05
- **Python version** 3.10.16:
- **FiftyOne version** : 1.3.0
- **FiftyOne installed from** (pip or source): pip
### Willingness to contribute
The FiftyOne Community encourages bug fix contributions. Would you or another
member of your organization be willing to contribute a fix for this bug to the
FiftyOne codebase?
- [ ] Yes. I can contribute a fix for this bug independently
- [ ] Yes. I would be willing to contribute a fix for this bug with guidance
from the FiftyOne community
- [x] No. I cannot contribute a bug fix at this time
|
open
|
2025-02-26T16:32:06Z
|
2025-03-03T20:08:41Z
|
https://github.com/voxel51/fiftyone/issues/5520
|
[
"bug"
] |
DJE98
| 7
|
SciTools/cartopy
|
matplotlib
| 1,778
|
importing cartopy.crs in Binder fails
|
I'm trying to load my notebook in binder and I seem to have issues with importing cartopy.crs. I guess this is somehow similar to #1740 but in binder.
I'm using a minimal environment.yml to test this
```
name: test-environment
-python=3.7
channels:
- conda-forge
dependencies:
- numpy=1.20.2
- matplotlib=3.4.1
- matplotlib-base=3.4.1
- pandas=1.2.3
- cartopy=0.17.0
- dask=2021.4.0
- pillow=8.1.2
```
And this is the error I'm getting:
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-57acb5775a36> in <module>
1 from matplotlib import pyplot as plt
----> 2 import cartopy.crs as ccrs
3 import cartopy.io.img_tiles as cimgt
4 # from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
5 # from cmocean import cm as ccmo
/srv/conda/envs/notebook/site-packages/cartopy/__init__.py in <module>
105 # Commonly used sub-modules. Imported here to provide end-user
106 # convenience.
--> 107 import cartopy.crs
108 import cartopy.feature # noqa: F401 (flake8 = unused import)
/srv/conda/envs/notebook/site-packages/cartopy/crs.py in <module>
34 import six
35
---> 36 from cartopy._crs import CRS, Geodetic, Globe, PROJ4_VERSION
37 from cartopy._crs import Geocentric # noqa: F401 (flake8 = unused import)
38 import cartopy.trace
ModuleNotFoundError: No module named 'cartopy._crs'
```
Is there some specific hidden requirement? I had to add specific versions of pillow because before that I was getting `ImportError: cannot import name _imaging`.
|
closed
|
2021-05-04T22:58:20Z
|
2021-05-12T04:52:58Z
|
https://github.com/SciTools/cartopy/issues/1778
|
[
"Type: Question",
"Component: installation"
] |
vrx-
| 7
|
gevent/gevent
|
asyncio
| 1,886
|
gevent.select.select still mark socket readable after all data is read on windows
|
* gevent version: 21.12 from pip
* Python version: cPython 3.7.6 downloaded from python.org
* Operating System: windows 7 x64
### Description:
It seems that there is a bug in `gevent.select.select` on Windows. After a socket has read all data from its peer and `select.select` is run against it again, it is still returned as readable, but in fact there is no data in the buffer, and a following `.recv()` blocks.
test code:
```python
import gevent
import gevent.monkey
gevent.monkey.patch_all()
import gevent.socket
import gevent.select
server = gevent.socket.socket()
server.bind(('127.0.0.1', 12345))
server.listen(5)
def socket_proc(socket):
data = b''
while True:
r, w, x = gevent.select.select([socket, ], [], [socket, ], timeout=0.1)
if r:
data += socket.recv(1024)
else:
if data:
socket.send((u'%s\n' % len(data)).encode('utf-8'))
data = b''
def listen_proc(server):
while True:
socket, _ = server.accept()
gevent.spawn(socket_proc, socket)
print('start..')
listen_proc(server)
```
One may use any TCP client, such as netcat, to send a short piece of data to the listening port, and will not receive any reply. Debugging shows that when the program calls `select()` for the 2nd time, it still returns the socket object in `r`; since there is no pending data in it, the next call to `socket.recv()` blocks forever.
It seems this is a Windows-specific bug, as I've tested it on Linux and it works without problems.
I've done the following tests (all on Windows, all Python versions downloaded from python.org, gevent installed by pip):
- python 2.7.11+ gevent 1.2.2: works
- python 2.7.11+ gevent 1.3.0: buggy
- python 2.7.11+ gevent 1.4.0: buggy
- python 2.7.11+ gevent 1.5.0: buggy
- python 2.7.11+ gevent 21.12: buggy
- python 3.7.6+ gevent 21.12: buggy
So I guess this is something related to libuv
|
open
|
2022-05-19T09:24:38Z
|
2022-10-07T01:46:17Z
|
https://github.com/gevent/gevent/issues/1886
|
[
"Platform: Windows"
] |
adamhj
| 5
|