| repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
modin-project/modin
|
data-science
| 6,849
|
Possible to remove to_pandas calls in merge and join operations
|
In `query_compiler.merge` and `query_compiler.join` we use `to_pandas()`, e.g. [right_pandas = right.to_pandas()](https://github.com/modin-project/modin/blob/7ef544f4467ddea18cfbb51ad2a6fcbbb12c0db3/modin/core/storage_formats/pandas/query_compiler.py#L525). This operation blocks the main thread, which could be expensive if the right dataframe is large.
It should be possible to remove the `to_pandas` calls by replacing `apply_full_axis` with `broadcast_apply_full_axis` and passing the `right` dataframe to it, as originally suggested by @YarShev in https://github.com/modin-project/modin/issues/5524#issuecomment-1880982441.
|
closed
|
2024-01-10T14:04:22Z
|
2024-01-11T16:18:29Z
|
https://github.com/modin-project/modin/issues/6849
|
[] |
arunjose696
| 1
|
aiortc/aiortc
|
asyncio
| 201
|
How can I get a specific channel count and sample_rate, like `channel=1 sample_rate=16000`, when I use `MediaRecorder, MediaPlayer or AudioTransformTrack` to get audio frames?
|
Hi, I use MediaPlayer to read a wav file, but it doesn't work when I change the options parameters. To get the original data, I have to save it with channel=2 and sample_rate=48000.
Do the options work? Please help me.
```python
import os
import wave

import pyaudio
from aiortc.contrib.media import MediaPlayer


async def save_wav(fine_name):
    # player = MediaPlayer(fine_name, options={'channels': '1', 'sample_rate': '16000'})
    with wave.open(fine_name, 'rb') as rf:
        print("rf.getnchannels():", rf.getnchannels())
        print("rf.getframerate():", rf.getframerate())
        print("rf.getsampwidth():", rf.getsampwidth())
    player = MediaPlayer(fine_name)
    frames = []
    frame = None
    try:
        frame = await player.audio.recv()
    except Exception as e:
        print("error 1:")
        print("type(e):", type(e))
        print(e)
    while frame is not None:
        # Collect the raw bytes of every plane of the frame.
        for p in frame.planes:
            data = p.to_bytes()
            # print("p.buffer_size):", p.buffer_size)
            # print("len(data):", len(data))
            frames.append(data)
        try:
            frame = await player.audio.recv()
        except Exception as e:
            print("error 2:")
            print("type(e):", type(e))
            print(e)
            frame = None
    # file_path is assumed to be defined elsewhere in the script.
    wave_save_path = os.path.join(file_path, "save.wav")
    if wave_save_path is not None:
        p = pyaudio.PyAudio()
        CHANNELS = 2
        FORMAT = pyaudio.paInt16
        RATE = 48000
        print("p.get_sample_size(FORMAT):", p.get_sample_size(FORMAT))
        wf = wave.open(wave_save_path, 'wb')
        wf.setnchannels(CHANNELS)
        wf.setsampwidth(p.get_sample_size(FORMAT))
        wf.setframerate(RATE)
        wf.writeframes(b''.join(frames))
        wf.close()
        with wave.open(wave_save_path, 'rb') as rf:
            print("rf.getnchannels():", rf.getnchannels())
            print("rf.getframerate():", rf.getframerate())
            print("rf.getsampwidth():", rf.getsampwidth())
```
```
2019-08-27 16:02:14,128 asyncio-DEBUG __init__:65 Using selector: EpollSelector
rf.getnchannels(): 1
rf.getframerate(): 16000
rf.getsampwidth(): 2
...
ALSA lib pcm.c:2495:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2495:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2495:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
p.get_sample_size(FORMAT): 2
rf.getnchannels(): 2
rf.getframerate(): 48000
rf.getsampwidth(): 2
```
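For reference, one way to get mono 16 kHz frames regardless of the `MediaPlayer` options is to resample each received frame yourself with PyAV's `AudioResampler`; a minimal sketch (the resampler arguments are my assumption from PyAV's API, not something aiortc does for you):
```python
from av import AudioResampler

# Target mono, 16-bit signed, 16 kHz. Recent PyAV returns a list of frames
# from resample(); older versions return a single frame.
resampler = AudioResampler(format="s16", layout="mono", rate=16000)

async def recv_resampled(track):
    frame = await track.recv()
    return resampler.resample(frame)
```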
|
closed
|
2019-08-27T04:16:09Z
|
2023-12-10T16:30:53Z
|
https://github.com/aiortc/aiortc/issues/201
|
[] |
supermanhuyu
| 5
|
jadore801120/attention-is-all-you-need-pytorch
|
nlp
| 79
|
Question about beamsearch
|
hello,
Your code is very clear! However, I have a question about beam search in your code. Why did you do two sorting operations in transformer/beam.py? Is there any special purpose?

Thanks a lot!
|
closed
|
2018-12-27T12:32:34Z
|
2019-12-08T10:28:40Z
|
https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/79
|
[] |
ZhengkunTian
| 2
|
hzwer/ECCV2022-RIFE
|
computer-vision
| 142
|
About the newly added VGG model
|
Hello, when trying to train RIFE recently, I found that RIFE added new VGG loss function related code, but the related code in rife.py is commented out. Simply uncommenting it and modifying the relevant code does not get training to run, so I would like to ask whether this part of the work is complete. I would be very grateful.
Like this:
```
self.vgg = VGGPerceptualLoss().to(device)
# loss_G = loss_l1 + loss_cons + loss_ter
loss_G = self.vgg(pred, gt) + loss_cons + loss_ter
```
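For context, a VGG perceptual loss typically compares intermediate VGG feature maps of the prediction and the ground truth; a minimal sketch of such a loss (an illustration, not RIFE's actual `VGGPerceptualLoss`):
```python
import torch.nn as nn
import torchvision

class SimpleVGGLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # First 16 layers of VGG19 as a frozen feature extractor.
        vgg = torchvision.models.vgg19(pretrained=True).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.criterion = nn.L1Loss()

    def forward(self, pred, gt):
        # L1 distance between feature maps of prediction and ground truth.
        return self.criterion(self.vgg(pred), self.vgg(gt))
```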
|
closed
|
2021-04-19T01:05:32Z
|
2021-04-21T09:03:28Z
|
https://github.com/hzwer/ECCV2022-RIFE/issues/142
|
[] |
98mxr
| 2
|
miguelgrinberg/python-socketio
|
asyncio
| 455
|
restricting access to socketio/client files with flask-socketio?
|
Hi,
I am having a problem where I am able to download certain Socket.IO files from my Flask application.
Sending a request to https://localhost:port/socket.io/ downloads a file with 'sid' and other information.
Similarly, I am able to download various Socket.IO files such as socket.io.tar.gz and socket.io.arj and other compressed files by appending the names to the path, e.g. https://localhost:port/socket.io/socket.io.tar.gz serves the file.
After searching a lot about this issue, one thing I think could be helpful is setting serveClient=false, as is recommended for production environments, but I am unable to find any information on how and where to set this parameter for Flask-SocketIO.
So my question is: how can we restrict access to these files? Is there some Socket.IO-side configuration or some Content-Security-Policy I am missing?
I'll be grateful if someone can help with this issue.
Thanks
|
closed
|
2020-04-01T10:39:12Z
|
2020-04-02T11:25:44Z
|
https://github.com/miguelgrinberg/python-socketio/issues/455
|
[
"question"
] |
raheel-ahmad
| 4
|
strawberry-graphql/strawberry
|
django
| 3,583
|
pyinstrument extension doesn't seem to give detail breakdown
|
I'm using strawberry with fastapi, running everything in docker. I've tried using the pyinstrument extension as per the docs, but I am not getting a breakdown of the execution.
## Describe the Bug
When using the pyinstrument extension, there is no breakdown, only a [Self] category.
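For reference, this is roughly how I wired the extension up (a sketch from my reading of the strawberry docs; the import path and the `report_path` parameter are assumptions):
```python
from pathlib import Path

import strawberry
from strawberry.extensions.pyinstrument import PyInstrument

@strawberry.type
class Query:
    @strawberry.field
    def hello(self) -> str:
        return "world"

# report_path is where the profiler writes its HTML report.
schema = strawberry.Schema(
    Query,
    extensions=[PyInstrument(report_path=Path("pyinstrument.html"))],
)
```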
## System Information
- Operating system: MacOS
- Strawberry version (if applicable):
## Additional Context
<img width="1369" alt="image" src="https://github.com/user-attachments/assets/7d23a35a-9005-48bc-9bf1-ef37fd08d9ec">
|
open
|
2024-07-27T16:32:23Z
|
2025-03-20T15:56:48Z
|
https://github.com/strawberry-graphql/strawberry/issues/3583
|
[
"bug"
] |
Vincent-liuwingsang
| 0
|
deepspeedai/DeepSpeed
|
machine-learning
| 6,692
|
Installing DeepSpeed in WSL.
|
I am using Windows 11. I have Windows Subsystem for Linux activated (Ubuntu) as well as installed CUDA, and Visual Studio C++ Build tools. I am trying to install deepspeed. However, I am getting the following 2 errors. Could anybody please help resolve this?


|
closed
|
2024-10-30T20:17:24Z
|
2024-11-08T22:11:37Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6692
|
[
"install",
"windows"
] |
anonymous-user803
| 5
|
chezou/tabula-py
|
pandas
| 156
|
tabula merges column in HDFC bank statement
|
Tabula merges the columns of the last page in an HDFC bank statement. Also, if a field is empty on any page, tabula does not detect that field and instead writes the output of the next column.
tabula.read_pdf(pdfpath,pages='all',columns=['68.0,272.0,357.5,397.0,474.5,553.0'])
I even tried providing column coordinates, but tabula is not picking them up the way Camelot did:
Camelot separates columns when coordinates are provided, but tabula doesn't.
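Note that `tabula.read_pdf` expects `columns` as a list of x-coordinates rather than a single comma-joined string, so the call above may not be applying the boundaries at all; a sketch of how I understand the API (with `guess=False` so the explicit boundaries are used):
```python
import tabula

dfs = tabula.read_pdf(
    pdfpath,  # path to the statement PDF, as in the call above
    pages="all",
    guess=False,  # disable automatic detection so the boundaries below are used
    # One x-coordinate (in PDF points) per column boundary:
    columns=[68.0, 272.0, 357.5, 397.0, 474.5, 553.0],
)
```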
|
closed
|
2019-06-26T13:17:36Z
|
2019-06-27T01:54:53Z
|
https://github.com/chezou/tabula-py/issues/156
|
[] |
ayubansal1998
| 1
|
modin-project/modin
|
pandas
| 6,849
|
Possible to remove to_pandas calls in merge and join operations
|
In `query_compiler.merge` and `query_compiler.join` we use `to_pandas()`, e.g. [right_pandas = right.to_pandas()](https://github.com/modin-project/modin/blob/7ef544f4467ddea18cfbb51ad2a6fcbbb12c0db3/modin/core/storage_formats/pandas/query_compiler.py#L525). This operation blocks the main thread, which could be expensive if the right dataframe is large.
It should be possible to remove the `to_pandas` calls by replacing `apply_full_axis` with `broadcast_apply_full_axis` and passing the `right` dataframe to it, as originally suggested by @YarShev in https://github.com/modin-project/modin/issues/5524#issuecomment-1880982441.
|
closed
|
2024-01-10T14:04:22Z
|
2024-01-11T16:18:29Z
|
https://github.com/modin-project/modin/issues/6849
|
[] |
arunjose696
| 1
|
mwaskom/seaborn
|
data-visualization
| 3,219
|
A violinplot in a FacetGrid can ignore the `split` argument
|
I am trying to produce a FacetGrid containing violin plots with the argument `split=True`, but the violins are not split. See the following example:
```python
import numpy as np
import pandas as pd
import seaborn as sns
np.random.seed(42)
df = pd.DataFrame(
{
"value": np.random.rand(20),
"condition": ["true", "false"]*10,
"category": ["right"]*10 + ["wrong"]*10,
}
)
g = sns.FacetGrid(data=df, col="category", hue="condition", col_wrap=5)
g.map(
sns.violinplot,
"value",
split=True,  # whether or not this is present, the violin plots are never split
);
```
Plots can be seen here: https://stackoverflow.com/q/74617402/6018688
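For what it's worth, the figure-level `catplot` does route `split` through to the violin plotter; a sketch of the equivalent call, reusing `df` from the snippet above (on older seaborn versions `split=True` may additionally require an `x` variable paired with a two-level hue):
```python
g = sns.catplot(
    data=df,
    y="value",
    hue="condition",
    col="category",
    col_wrap=5,
    kind="violin",
    split=True,  # honored here, unlike with FacetGrid.map
)
```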
|
closed
|
2023-01-10T15:20:58Z
|
2023-01-10T23:40:53Z
|
https://github.com/mwaskom/seaborn/issues/3219
|
[] |
fabianegli
| 4
|
matplotlib/mplfinance
|
matplotlib
| 619
|
PnF charts using just close.
|
I have a dataset with only one column, representing the close price. I am unable to generate a PnF chart, as PnF appears to expect date, open, high, low & close.
PnF charts are drawn using either High/Low or just the close.
Is there a way to generate a PnF chart with just the close?
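One workaround (my own sketch, not a documented mplfinance feature) is to synthesize an OHLC frame in which open, high, and low all equal the close, which is exactly the close-only PnF construction:
```python
import mplfinance as mpf
import numpy as np
import pandas as pd

# Hypothetical close-only series on a DatetimeIndex.
idx = pd.date_range("2023-01-02", periods=100, freq="D")
close = pd.Series(100 + np.cumsum(np.random.randn(100)), index=idx)

df = pd.DataFrame({"Open": close, "High": close, "Low": close, "Close": close})
mpf.plot(df, type="pnf")  # point-and-figure chart built from close alone
```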
|
closed
|
2023-05-29T16:10:02Z
|
2023-05-30T10:22:51Z
|
https://github.com/matplotlib/mplfinance/issues/619
|
[
"question"
] |
boomkap
| 4
|
NVlabs/neuralangelo
|
computer-vision
| 28
|
CUDA out of memory. Any way to run training on an 8GB GPU?
|
Is there any way to run training on an NVIDIA GPU with only 8GB, no matter how long it takes?
I can't train on my NVIDIA 3070 with 8GB.
What parameters can I edit to solve this issue?
|
closed
|
2023-08-16T15:09:57Z
|
2023-08-25T06:24:26Z
|
https://github.com/NVlabs/neuralangelo/issues/28
|
[] |
parzoe
| 14
|
ghtmtt/DataPlotly
|
plotly
| 202
|
scatterplot of time-data cross-connects
|
**Describe the bug**
A bug when plotting time plots: the line "cross-connects" in a few places and runs back to the start.
**To Reproduce**
1. plot this data geopkg provided as a scatter plot
- [time-plot-bug.zip](https://github.com/ghtmtt/DataPlotly/files/4298541/time-plot-bug.zip)
2. use x-field: format_date( "dato_tid",'yyyy-MM-dd hh:mm:ss')
3. use y-field: kotevann
4. check results in screenshot below
**Screenshots**


**Desktop (please complete the following information):**
- OS: win10
- QGIS release: 3.12.0
- DataPlotly release: 3.3
**Additional context**
I wonder if it has something to do with the feature id? When I delete the "fid" field (see the screenshot below), it works.

|
closed
|
2020-03-06T13:38:54Z
|
2020-03-11T12:03:49Z
|
https://github.com/ghtmtt/DataPlotly/issues/202
|
[
"bug"
] |
danpejobo
| 5
|
mwaskom/seaborn
|
data-visualization
| 3,542
|
[BUG] Edge color with `catplot` with `kind=bar`
|
Hello,
When passing `edgecolor` to catplot for a bar plot, the argument doesn't reach the underlying `p.plot_bars`, so it never affects the output.
Currently the line
`edgecolor = p._complement_color(kwargs.pop("edgecolor", default), color, p._hue_map)`
computes the value, but it is _not_ passed into the `elif kind == "bar"` block. A local "hack" I implemented is to add `kwargs["edgecolor"] = edgecolor` before the `p.plot_bars` call. Let me know if I should provide more details.
This is on version `0.13.0`.
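A minimal repro of the report, for reference (the dataset choice is mine; on 0.13.0 the `edgecolor` below is silently dropped for `kind="bar"`):
```python
import seaborn as sns

tips = sns.load_dataset("tips")
sns.catplot(data=tips, x="day", y="total_bill", kind="bar", edgecolor="black")
```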
|
closed
|
2023-10-27T07:33:09Z
|
2023-11-04T16:09:47Z
|
https://github.com/mwaskom/seaborn/issues/3542
|
[
"bug",
"mod:categorical"
] |
prabhuteja12
| 5
|
onnx/onnx
|
deep-learning
| 5,853
|
Request for Swish Op
|
# Swish/SiLU
Do you have any plans to implement the Swish Op in ONNX?
### Describe the operator
Swish is a popular activation function. Its mathematical definition can be found at https://en.wikipedia.org/wiki/Swish_function
TensorFlow has https://www.tensorflow.org/api_docs/python/tf/nn/silu
Keras has https://keras.io/api/layers/activations/ (also in https://www.tensorflow.org/api_docs/python/tf/keras/activations/swish)
Pytorch has https://pytorch.org/docs/stable/generated/torch.nn.SiLU.html
### Can this operator be constructed using existing onnx operators?
Yes, it can be implemented as a combination of Mul and Sigmoid ops:
`x * Sigmoid(beta * x)`
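For illustration, this composition is straightforward to express with the onnx helper API; a sketch with `beta = 1` (i.e. SiLU):
```python
import onnx
from onnx import TensorProto, helper

# Swish(x) = x * Sigmoid(beta * x); beta is fixed to 1 here.
sig = helper.make_node("Sigmoid", ["x"], ["sig_x"])
mul = helper.make_node("Mul", ["x", "sig_x"], ["y"])
graph = helper.make_graph(
    [sig, mul],
    "swish",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [None])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [None])],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)
```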
### Is this operator used by any model currently? Which one?
Yes. Modern YOLO-series models like yolov5, yolov7, yolov8, and yolop, as well as EfficientNet, all use such Swish ops.
Yolov5: https://github.com/ultralytics/yolov5/blob/master/models/tf.py#L224
EfficientNet:
https://paperswithcode.com/method/efficientnet which has Swish in https://github.com/lukemelas/EfficientNet-PyTorch/blob/2eb7a7d264344ddf15d0a06ee99b0dca524c6a07/efficientnet_pytorch/model.py#L294
### Are you willing to contribute it? (Y/N)
Possibly Yes.
### Notes
|
open
|
2024-01-11T08:18:22Z
|
2025-02-01T06:43:04Z
|
https://github.com/onnx/onnx/issues/5853
|
[
"topic: operator",
"stale",
"contributions welcome"
] |
vera121
| 7
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 11,994
|
missing type cast for jsonb in values for postgres?
|
### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/11990
_Originally posted by **JabberWocky-22**, October 12, 2024_
I'm using `values` to update multi rows in single execution in postgres.
The SQL failed due to a type mismatch on the jsonb column: the rendered value doesn't get a type cast, so it falls back to text.
Currently I can add a type cast in the update statement, but it would be nice to add it in the rendering, just like the uuid column.
To Reproduce:
```python
import sqlalchemy as sa
from sqlalchemy.dialects.postgresql import JSONB

table = sa.Table(
    "test",
    sa.MetaData(),
    sa.Column("id", sa.Integer),
    sa.Column("uuid", sa.Uuid, unique=True),
    sa.Column("extra", JSONB),
)

engine = sa.create_engine("postgresql://xxx:xxx@localhost:1234/db")

with engine.begin() as conn:
    table.create(conn, checkfirst=True)
    conn.execute(
        table.insert().values(
            [
                {"id": 1, "uuid": "d24587a1-06d9-41df-b1c3-3f423b97a755"},
                {"id": 2, "uuid": "4b07e1c8-d60c-4ea8-9d01-d7cd01362224"},
            ]
        )
    )
    value = sa.values(
        sa.Column("id", sa.Integer),
        sa.Column("uuid", sa.Uuid),
        sa.Column("extra", JSONB),
        name="update_data",
    ).data(
        [
            (
                1,
                "8b6ec1ec-b979-4d0b-b2ce-9acc6e4c2943",
                {"foo": 1},
            ),
            (
                2,
                "a2123bcb-7ea3-420a-8284-1db4b2759d79",
                {"bar": 2},
            ),
        ]
    )
    conn.execute(
        table.update()
        .values(uuid=value.c.uuid, extra=value.c.extra)
        .where(table.c.id == value.c.id)
    )
```
traceback:
```bash
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DatatypeMismatch) column "extra" is of type jsonb but expression is of type text
LINE 1: UPDATE test SET uuid=update_data.uuid, extra=update_data.ext...
^
HINT: You will need to rewrite or cast the expression.
[SQL: UPDATE test SET uuid=update_data.uuid, extra=update_data.extra FROM (VALUES (%(param_1)s, %(param_2)s::UUID, %(param_3)s), (%(param_4)s, %(param_5)s::UUID, %(param_6)s)) AS update_data (id, uuid, extra) WHERE test.id = update_data.id]
[parameters: {'param_1': 1, 'param_2': '8b6ec1ec-b979-4d0b-b2ce-9acc6e4c2943', 'param_3': '{"foo": 1}', 'param_4': 2, 'param_5': 'a2123bcb-7ea3-420a-8284-1db4b2759d79', 'param_6': '{"bar": 2}'}]
```
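For reference, the workaround mentioned above can be spelled with an explicit cast in the UPDATE (a sketch reusing the `value` construct from the repro):
```python
conn.execute(
    table.update()
    # Cast the VALUES column back to JSONB so postgres accepts the assignment.
    .values(uuid=value.c.uuid, extra=sa.cast(value.c.extra, JSONB))
    .where(table.c.id == value.c.id)
)
```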
Versions:
SQLAlchemy: 2.0.35
psycopg2-binary: 2.9.3
db: PostgreSQL 15.2
Have a nice day
|
closed
|
2024-10-13T09:00:38Z
|
2024-12-03T08:43:54Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/11994
|
[
"postgresql",
"schema",
"use case"
] |
CaselIT
| 6
|
ageitgey/face_recognition
|
python
| 968
|
Crop image to preferred region to speed up detection
|
Hi everyone.
My problem is that I cannot resize the input image to a smaller resolution (because the faces then become too small to be detected) before applying CNN detection. In my scenario, faces only appear in a specific region of the frame. Is it okay to keep the image at its original size, but crop only the region where faces most likely appear, apply face detection to that cropped region, and then use the result for recognition without resizing anything? Has anyone tried this? Please give me some advice. Thank you a lot!!!
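Cropping first is a common approach; a sketch of how it might look with this library (the region coordinates and file name are hypothetical), detecting in the ROI and shifting the boxes back to full-frame coordinates before computing encodings:
```python
import face_recognition

image = face_recognition.load_image_file("frame.jpg")
top, left = 100, 200          # upper-left corner of the region of interest
roi = image[top:500, left:800]

# Detect in the crop only, then offset the (top, right, bottom, left) boxes
# back into full-frame coordinates.
locations = [
    (t + top, r + left, b + top, l + left)
    for (t, r, b, l) in face_recognition.face_locations(roi, model="cnn")
]
encodings = face_recognition.face_encodings(image, known_face_locations=locations)
```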
|
closed
|
2019-11-05T03:56:06Z
|
2019-11-05T07:42:59Z
|
https://github.com/ageitgey/face_recognition/issues/968
|
[] |
congphase
| 2
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 300
|
Error when using a Miner with ArcFaceLoss when training with Mixed Precision
|
I'm training my model with pytorch-lightning, using its mixed precision. When I tried to add a miner with the ArcFaceLoss, I got the following error:
```
File "/opt/conda/lib/python3.7/site-packages/pytorch_metric_learning/losses/base_metric_loss_function.py", line 34, in forward
loss_dict = self.compute_loss(embeddings, labels, indices_tuple)
File "/opt/conda/lib/python3.7/site-packages/pytorch_metric_learning/losses/large_margin_softmax_loss.py", line 105, in compute_loss
miner_weights = lmu.convert_to_weights(indices_tuple, labels, dtype=dtype)
File "/opt/conda/lib/python3.7/site-packages/pytorch_metric_learning/utils/loss_and_miner_utils.py", line 211, in convert_to_weights
weights[indices] = counts / torch.max(counts)
RuntimeError: Index put requires the source and destination dtypes match, got Half for the destination and Float for the source.
```
I guess the output of my model (a convolutional model) had `torch.float16` as its dtype, and when converting the mined indices to weights, the code ended up trying to put a `torch.float32` tensor into the weights (which were created with the same dtype as my embeddings).
I can see that the code actually tries to convert the `counts` to the same dtype:
```
counts = c_f.to_dtype(counts, dtype=dtype) / torch.sum(counts)
weights[indices] = counts / torch.max(counts)
```
However, when debugging, I noticed that it doesn't have any effect because `torch.is_autocast_enabled()` returns `True`.
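Until a fix lands, one workaround (my own sketch, not the library's official guidance) is to step out of autocast before the metric-learning ops so every tensor stays float32:
```python
import torch
from pytorch_metric_learning import losses, miners

loss_func = losses.ArcFaceLoss(num_classes=10, embedding_size=128)
miner = miners.MultiSimilarityMiner()

def compute_loss(embeddings, labels):
    # Leave autocast so the loss and miner never see float16 tensors.
    with torch.cuda.amp.autocast(enabled=False):
        embeddings = embeddings.float()
        return loss_func(embeddings, labels, miner(embeddings, labels))
```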
|
closed
|
2021-04-08T11:01:39Z
|
2021-05-10T02:54:30Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/300
|
[
"bug",
"fixed in dev branch"
] |
fernandocamargoai
| 2
|
quokkaproject/quokka
|
flask
| 97
|
Release PyPI package and change the core architecture
|
closed
|
2013-11-22T12:14:54Z
|
2015-07-16T02:56:42Z
|
https://github.com/quokkaproject/quokka/issues/97
|
[] |
rochacbruno
| 6
|
|
deeppavlov/DeepPavlov
|
nlp
| 1,628
|
Predictions NER_Ontonotes_BERT_Mult for entities with interpunction
|
DeepPavlov version: 1.0.2
Python version: 3.8
Operating system: Ubuntu
**Issue**:
I am using the `ner_ontonotes_bert_mult` model to predict entities for text. For sentences with punctuation inside the entities, this gives unexpected results. Before the 1.0.0 release, I used the [Deeppavlov docker image](https://hub.docker.com/r/deeppavlov/base-cpu) with the `ner_ontonotes_bert_mult` config as well. I didn't encounter these issues with the older version of Deeppavlov.
**Content or a name of a configuration file**:
```
[ner_ontonotes_bert_mult](https://github.com/deeppavlov/DeepPavlov/blob/1.0.2/deeppavlov/configs/ner/ner_ontonotes_bert_mult.json)
```
**Command that led to the unexpected results**:
```python
from deeppavlov import build_model

deeppavlov_model = build_model(
    "ner_ontonotes_bert_mult",
    install=True,
    download=True,
)

sentence = 'Today at 13:10 we had a meeting'
output = deeppavlov_model([sentence])

print(output[0])
# [['Today', 'at', '13', ':', '10', 'we', 'had', 'a', 'meeting']]
print(output[1])
# [['O', 'O', 'B-TIME', 'O', 'B-TIME', 'O', 'O', 'O', 'O']]
```
As you can see, `13:10` is not recognized as a time entity as a whole; instead `13` is tagged `B-TIME`, `:` is `O`, and `10` is `B-TIME`. The same happens for names with punctuation, such as `E.A. Jones`. I was wondering what I could do to solve this issue. Is it possible to fine-tune this model on such examples?
|
closed
|
2023-02-15T15:17:20Z
|
2023-03-16T08:22:50Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1628
|
[
"bug"
] |
ronaldvelzen
| 3
|
marcomusy/vedo
|
numpy
| 882
|
Slice error
|
Examples using `msh.intersect_with_plane` (tried the torus and the bunny from `Mesh.slice`) produce the same error: AttributeError: module 'vedo.vtkclasses' has no attribute 'vtkPolyDataPlaneCutter'
|
closed
|
2023-06-14T17:07:00Z
|
2023-10-18T13:09:54Z
|
https://github.com/marcomusy/vedo/issues/882
|
[] |
mAxGarak
| 5
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 176
|
[BUG] Cannot read property 'JS_MD5_NO_COMMON_JS' of null
|
***On which platform did the error occur?***
Douyin
***On which endpoint did the error occur?***
API-V1
http://127.0.0.1:8000/api?url=
***What input value was submitted?***
https://www.douyin.com/video/7153585499477757192
***Did you try again?***
Yes; some time after the error occurred, it still persists.
***Have you read this project's README or the API documentation?***
Yes, and I am quite sure the problem is caused by the program. I also tried the official site's API, which returned a 500.
|
closed
|
2023-03-14T03:23:14Z
|
2023-03-14T20:08:00Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/176
|
[
"BUG"
] |
WeiLi1201
| 3
|
pytest-dev/pytest-django
|
pytest
| 790
|
Tests do not run without django-configurations installed.
|
In my project I do not use django-configurations, as I don't really need it.
After installing `pytest-django`, I try to run my tests; here's what I get:
```
pytest
Traceback (most recent call last):
File "/home/jakub/.virtualenvs/mobigol/bin/pytest", line 8, in <module>
sys.exit(main())
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/_pytest/config/__init__.py", line 72, in main
config = _prepareconfig(args, plugins)
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/_pytest/config/__init__.py", line 223, in _prepareconfig
pluginmanager=pluginmanager, args=args
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pluggy/callers.py", line 203, in _multicall
gen.send(outcome)
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/_pytest/helpconfig.py", line 89, in pytest_cmdline_parse
config = outcome.get_result()
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/_pytest/config/__init__.py", line 742, in pytest_cmdline_parse
self.parse(args)
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/_pytest/config/__init__.py", line 948, in parse
self._preparse(args, addopts=addopts)
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/_pytest/config/__init__.py", line 906, in _preparse
early_config=self, args=args, parser=self._parser
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/home/jakub/.virtualenvs/mobigol/lib/python3.6/site-packages/pytest_django/plugin.py", line 239, in pytest_load_initial_conftests
import configurations.importer
ModuleNotFoundError: No module named 'configurations'
```
Also my package versions:
```
pytest==5.3.1
pytest-django==3.3.0
pytest-factoryboy==2.0.3
```
Is `django-configurations` required? I couldn't find any information about it in the documentation.
|
closed
|
2020-01-04T17:42:33Z
|
2025-01-31T08:23:15Z
|
https://github.com/pytest-dev/pytest-django/issues/790
|
[
"bug"
] |
jakubjanuzik
| 6
|
ranaroussi/yfinance
|
pandas
| 2,155
|
Wrong time stamp for 1h time frame for versions after 0.2.44
|
### Describe bug
Whenever I try to download stock data for the 1h time frame, the timestamps are wrong. This problem was not present in 0.2.44 and previous versions. It was easier to use when it gave output in the exchange's local timezone.
### Simple code that reproduces your problem
```python
import yfinance as yf

print("\nyfinance version:", yf.__version__)

ticker = 'AAPL'
interval = '1h'
data = yf.download(ticker, period='1mo', interval=interval)

print("\nLast 7 Close Prices:")
print(data['Close'].tail(7))
```
### Debug log
I downloaded stock data for AAPL with version 0.2.44; this is the printout:
Last 7 Close Prices:
Datetime
2024-11-25 11:30:00-05:00 230.206802
2024-11-25 12:30:00-05:00 230.414993
2024-11-25 13:30:00-05:00 230.684998
2024-11-25 14:30:00-05:00 230.520004
2024-11-25 15:30:00-05:00 232.880005
2024-11-26 09:30:00-05:00 234.360001
2024-11-26 10:30:00-05:00 235.139999
But when I downloaded stock data for AAPL with version 0.2.50, this is the printout:
Ticker AAPL
Datetime
2024-11-25 16:30:00+00:00 230.206802
2024-11-25 17:30:00+00:00 230.414993
2024-11-25 18:30:00+00:00 230.684998
2024-11-25 19:30:00+00:00 230.520004
2024-11-25 20:30:00+00:00 232.880005
2024-11-26 14:30:00+00:00 234.360001
2024-11-26 15:30:00+00:00 235.313507
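For anyone needing the old behavior, one workaround is to convert the returned UTC index back to exchange-local time with plain pandas, reusing `data` from the snippet above (the timezone name is an assumption for US tickers):
```python
data.index = data.index.tz_convert("America/New_York")
print(data['Close'].tail(7))
```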
### Bad data proof


### `yfinance` version
0.2.50
### Python version
3.10
### Operating system
_No response_
|
open
|
2024-11-26T16:10:00Z
|
2025-02-16T20:12:37Z
|
https://github.com/ranaroussi/yfinance/issues/2155
|
[] |
indra5534
| 10
|
nonebot/nonebot2
|
fastapi
| 2,924
|
Plugin: LLOneBot-Master
|
### PyPI project name
nonebot-plugin-llob-master
### Plugin import package name
nonebot_plugin_llob_master
### Tags
[{"label":"LLOneBot","color":"#e3e9e9"},{"label":"Windows","color":"#1da6eb"}]
### Plugin configuration options
_No response_
|
closed
|
2024-08-25T08:15:27Z
|
2024-08-27T13:20:46Z
|
https://github.com/nonebot/nonebot2/issues/2924
|
[
"Plugin"
] |
kanbereina
| 5
|
smarie/python-pytest-cases
|
pytest
| 91
|
Enforce file naming pattern: automatically get cases from file named `test_xxx_cases.py`
|
We suggest this pattern in the docs; we could make it the default.
|
closed
|
2020-06-02T12:57:26Z
|
2020-07-09T09:18:12Z
|
https://github.com/smarie/python-pytest-cases/issues/91
|
[
"enhancement"
] |
smarie
| 2
|
browser-use/browser-use
|
python
| 668
|
Unable to submit vulnerability report
|
### Type of Documentation Issue
Incorrect documentation
### Documentation Page
https://github.com/browser-use/browser-use/security/policy
### Issue Description
The documentation states that security issues should be reported by creating a report at https://github.com/browser-use/browser-use/security/advisories/new. However, accessing this link results in a 404 error.
It seems that the repository's private vulnerability reporting setting is not enabled.
ref: https://docs.github.com/en/code-security/security-advisories/working-with-repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository
Could you update the settings to allow vulnerability reports to be submitted?
### Suggested Changes
There is no need to update the document itself.
|
closed
|
2025-02-11T14:05:47Z
|
2025-02-22T23:31:32Z
|
https://github.com/browser-use/browser-use/issues/668
|
[
"documentation"
] |
melonattacker
| 5
|
jowilf/starlette-admin
|
sqlalchemy
| 477
|
Enhancement: Register Page
|
What about an out-of-the-box register page? We have a login page, so why not have a register page? We could use the `Fields` system to make the registration form more dynamic.
|
open
|
2024-01-17T12:20:00Z
|
2024-01-27T20:55:33Z
|
https://github.com/jowilf/starlette-admin/issues/477
|
[
"enhancement"
] |
hasansezertasan
| 0
|
snarfed/granary
|
rest-api
| 122
|
json feed: handle displayName as well as title
|
e.g. @aaronpk's articles feed: https://granary.io/url?input=html&output=jsonfeed&reader=false&url=http://aaronparecki.com/articles
|
closed
|
2017-12-05T20:59:31Z
|
2017-12-06T05:10:46Z
|
https://github.com/snarfed/granary/issues/122
|
[] |
snarfed
| 0
|
eriklindernoren/ML-From-Scratch
|
deep-learning
| 55
|
Moore-Penrose pseudo-inverse in linear regression
|
Hi, I am reimplementing ML algorithms based on yours,
but I am a little confused about the calculation of the Moore-Penrose pseudo-inverse in linear regression.
https://github.com/eriklindernoren/ML-From-Scratch/blob/40b52e4edf9485c4e479568f5a41501914fdc55c/mlfromscratch/supervised_learning/regression.py#L111-L114
Why not use `np.linalg.pinv(X)`? According to the docstring:

It can compute the Moore-Penrose pseudo-inverse directly.
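For comparison, `np.linalg.pinv` and the explicit SVD route give the same least-squares solution; a quick sketch on random full-rank data:
```python
import numpy as np

X = np.random.rand(20, 5)
y = np.random.rand(20)

# Closed-form least squares via the Moore-Penrose pseudo-inverse.
w = np.linalg.pinv(X) @ y

# Equivalent hand-rolled SVD route.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
w_svd = Vt.T @ np.diag(1.0 / S) @ U.T @ y

assert np.allclose(w, w_svd)
```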
Thanks!
|
open
|
2019-06-21T03:27:28Z
|
2019-11-20T16:09:03Z
|
https://github.com/eriklindernoren/ML-From-Scratch/issues/55
|
[] |
liadbiz
| 3
|
fastapi/sqlmodel
|
pydantic
| 126
|
max_length does not work in Fields
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from sqlmodel import SQLModel, Field

class Locations(SQLModel, table=True):
    LocationName: str = Field(max_length=255)
```
### Description
When running the above code, I get a
`ValueError: On field "LocationName" the following field constraints are set but not enforced: max_length`
I also tried pydantic's approach of changing the `str` typing to `constr(max_length=255)`, to no avail.
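For reference, one workaround on versions where `max_length` is not enforced is to hand the column type to SQLAlchemy directly via `sa_column` (a sketch; the primary-key field is added so the table definition is valid):
```python
from typing import Optional

from sqlalchemy import Column, String
from sqlmodel import Field, SQLModel

class Locations(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    # VARCHAR(255) enforced at the database level.
    LocationName: str = Field(sa_column=Column(String(255)))
```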
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.9.4
### Additional Context
_No response_
|
closed
|
2021-10-08T13:35:17Z
|
2024-04-03T16:05:03Z
|
https://github.com/fastapi/sqlmodel/issues/126
|
[
"question"
] |
yudjinn
| 7
|
ading2210/poe-api
|
graphql
| 170
|
KeyError: 'payload'
|
Getting the following error
```
INFO:root:Setting up session...
INFO:root:Downloading next_data...
Traceback (most recent call last):
File "/home/shubharthak/Desktop/apsaraAI/ChatGPT.py", line 10, in <module>
client = poe.Client(api)
File "/home/shubharthak/miniconda3/lib/python3.10/site-packages/poe.py", line 123, in __init__
self.connect_ws()
File "/home/shubharthak/miniconda3/lib/python3.10/site-packages/poe.py", line 366, in connect_ws
self.setup_connection()
File "/home/shubharthak/miniconda3/lib/python3.10/site-packages/poe.py", line 149, in setup_connection
self.next_data = self.get_next_data(overwrite_vars=True)
File "/home/shubharthak/miniconda3/lib/python3.10/site-packages/poe.py", line 198, in get_next_data
self.viewer = next_data["props"]["pageProps"]["payload"]["viewer"]
KeyError: 'payload'
```
Code:-
```
import logging
import os
import sys

import poe
from IPython.display import display, Markdown, Latex

api = os.environ.get('poe')
poe.logger.setLevel(logging.INFO)
client = poe.Client(api)


def say(msg):
    ans = ''
    for chunk in client.send_message('chinchilla', msg, with_chat_break=False):
        # print(not chunk['text_new'].find("'''"))
        # if not chunk['text_new'].find("'''"):
        #     print(chunk['text_new'], end=' ', flush=True)
        ans += chunk['text_new']
        # yield chunk['text_new']
        print(chunk['text_new'], end='', flush=True)
    # return ans
    # res = Markdown(ans + ' ')
    # display(res)
    # print(display(res), end=' ')


if __name__ == '__main__':
    while True:
        msg = input('> ')
        if 'bye' in msg or 'exit' in msg:
            break
        say(msg)
        print()
    print('Thank you for using me')
```
|
closed
|
2023-07-17T19:29:04Z
|
2023-07-17T19:38:47Z
|
https://github.com/ading2210/poe-api/issues/170
|
[] |
shubharthaksangharsha
| 0
|
OWASP/Nettacker
|
automation
| 171
|
Issue in getting results via the discovery function in the service scanner
|
I was trying to perform the same operation on my localhost, and the results were different every time.
```python
In [1]: from lib.payload.scanner.service.engine import discovery
In [2]: discovery("127.0.0.1")
Out[2]: {443: 'UNKNOWN', 3306: 'UNKNOWN'}
In [3]: discovery("127.0.0.1")
Out[3]:
{80: 'http',
443: 'UNKNOWN',
631: 'UNKNOWN',
3306: 'UNKNOWN',
5432: 'UNKNOWN',
8002: 'http'}
In [4]: discovery("127.0.0.1")
Out[4]:
{80: 'http',
139: 'UNKNOWN',
443: 'UNKNOWN',
445: 'UNKNOWN',
631: 'UNKNOWN',
3306: 'UNKNOWN',
5432: 'UNKNOWN',
8001: 'UNKNOWN',
8002: 'http'}
In [5]: discovery("127.0.0.1")
Out[5]:
{80: 'http',
139: 'UNKNOWN',
443: 'UNKNOWN',
445: 'UNKNOWN',
631: 'UNKNOWN',
3306: 'UNKNOWN',
5432: 'UNKNOWN',
8001: 'UNKNOWN',
8002: 'http'}
```

Am I doing anything wrong, or is it a problem with the module? Performing a port scan, however, works fine for me.
_________________
**OS**: `Ubuntu`
**OS Version**: `16.04`
**Python Version**: `2.7.12`
|
closed
|
2018-06-26T01:49:22Z
|
2021-02-02T20:28:14Z
|
https://github.com/OWASP/Nettacker/issues/171
|
[
"enhancement",
"possible bug"
] |
shaddygarg
| 8
|
newpanjing/simpleui
|
django
| 512
|
Multiple stylesheet files, including but not limited to base.less, depend on the user model
|
base.less, line 273:
```
#user_form{
background-color: white;
margin: 10px;
padding: 10px;
//color: #5a9cf8;
}
```
Once the User model has been swapped out, its name is no longer `user`, so this selector no longer matches, which makes the styling look a bit odd.
|
open
|
2025-02-05T12:22:14Z
|
2025-02-05T12:22:14Z
|
https://github.com/newpanjing/simpleui/issues/512
|
[] |
WangQixuan
| 0
|
mljar/mljar-supervised
|
scikit-learn
| 447
|
Where can I find model details?
|
I want to see the best model's parameters, preprocessing steps, etc., so that I can reproduce it later or train the model again with the same parameters.
Thanks,
|
closed
|
2021-07-30T19:37:58Z
|
2021-08-28T19:56:47Z
|
https://github.com/mljar/mljar-supervised/issues/447
|
[] |
abdulwaheedsoudagar
| 1
|
fohrloop/dash-uploader
|
dash
| 73
|
Uploading multiple files shows wrong total file count in case of errors
|
Affected version: f033683 (flow-dev branch)
Steps to reproduce
1. Use `max_file_size` for `du.Upload`
2. Upload multiple files (folder of files) where part of files (e.g. 2 files) are below `max_file_size` and part of files (e.g. 2 files) is above `max_file_size`
3. The upload text total files will reflect the *original* amount of selected files, even though some of the files were dropped away because of `max_file_size`. (In the figure: `Uploading (64.00 Mb, File 1/4)` should be `Uploading (64.00 Mb, File 1/2)`)

|
closed
|
2022-02-24T19:23:56Z
|
2022-02-24T19:29:21Z
|
https://github.com/fohrloop/dash-uploader/issues/73
|
[
"bug"
] |
fohrloop
| 2
|
HumanSignal/labelImg
|
deep-learning
| 748
|
No module named 'libs.resources'
|
```
(ve) user1@comp1:~/path/to/labelImg$ python labelImg.py
Traceback (most recent call last):
  File "labelImg.py", line 33, in <module>
    from libs.resources import *
ModuleNotFoundError: No module named 'libs.resources'
```
- **OS:** Ubuntu
- **PyQt version:** 5
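For reference, the usual fix from the labelImg README is to generate the resources module before running (commands as I recall them from the README; treat as assumptions):
```
pyrcc5 -o libs/resources.py resources.qrc
python labelImg.py
```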
|
closed
|
2021-05-16T06:52:54Z
|
2021-05-17T05:00:50Z
|
https://github.com/HumanSignal/labelImg/issues/748
|
[] |
waynemystir
| 1
|
huggingface/diffusers
|
deep-learning
| 10,080
|
Proposal to add sigmas option to FluxPipeline
|
**Is your feature request related to a problem? Please describe.**
Flux's image generation is great, but it seems to have a tendency to remove too much detail.
**Describe the solution you'd like.**
Add the `sigmas` option to `FluxPipeline` to enable adjustment of the degree of noise removal.
**Additional context.**
I propose adding the `sigmas` option to `FluxPipeline`.
The details are as follows.
[Yntec](https://huggingface.co/Yntec) picked up a FLUX modification proposal from [Reddit](https://www.reddit.com/r/comfyui/comments/1g9wfbq/simple_way_to_increase_detail_in_flux_and_remove/). [r3gm](https://huggingface.co/r3gm), the author of [stablepy](https://github.com/R3gm/stablepy), wrote the code for the logic part of the FLUX pipeline modification, and I made [a demo](https://huggingface.co/spaces/John6666/flux-sigmas-test) via [a test commit on github](https://github.com/huggingface/diffusers/commit/ad3344e2be033887d854d2731757db8b80dcfb06); it turned out to work as expected.
This time, we only modified the pipeline for T2I for testing. During discussions with r3gm, it was discovered that [the `sigmas` option, which exists in StableDiffusionXLPipeline](https://github.com/huggingface/diffusers/blob/c96bfa5c80eca798d555a79a491043c311d0f608/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py#L841), does not exist in `FluxPipeline`, so the actual implementation was switched to porting the `sigmas` option.
Also, in the current `FluxPipeline`, I also found a bug where specifying `timesteps` would probably result in an error because `sigmas` are hard-coded, even though the SDXL pipeline code is reused, so I fixed it while I was at it.
If you want to use Reddit's suggested **0.95**, specify it as follows.
```py
import numpy as np

# pipe, prompt, and the other generation arguments are assumed to be defined earlier.
factor = 0.95
sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps)
sigmas = sigmas * factor
image_sigmas = pipe(
prompt=prompt,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
width=width,
height=height,
generator=generator,
output_type="pil",
sigmas=sigmas
).images[0]
```
I will post some samples that were actually generated in the demo.
Prompt: anthropomorphic pig Programmer with laptop, colorfull, funny / Seed: 9119

Prompt: A painting by Picasso of Hatsune Miku in an office. Desk, window, books. / Seed: 9119

Prompt: 80s cinematic colored sitcom screenshot. young husband with wife. festive scene at a copper brewery with a wooden keg of enjoying burrito juice in the center. sitting cute little daughter. Display mugs of dark beer. Closeup. beautiful eyes. accompanied by halloween Shirley ingredients. portrait smile / Seed: 9119

|
closed
|
2024-12-02T11:14:40Z
|
2024-12-02T18:16:49Z
|
https://github.com/huggingface/diffusers/issues/10080
|
[] |
John6666cat
| 3
|
AUTOMATIC1111/stable-diffusion-webui
|
deep-learning
| 15,773
|
[Bug]: CUDA error: an illegal instruction was encountered
|
### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
The generation stops and displays this error:
CUDA error: an illegal instruction was encountered
Complete log given below.
### Steps to reproduce the problem
Occurs while generating the image after clicking the generate button.
### What should have happened?
Should have generated the output. It goes up to 10 percent or so, then quits.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
[sysinfo-2024-05-13-05-11.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/15289638/sysinfo-2024-05-13-05-11.json)
### Console logs
```Shell
venv "D:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --lowvram --precision full --no-half --skip-torch-cuda-test
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [6ce0161689] from D:\AI\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\AI\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
D:\AI\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Startup time: 11.7s (prepare environment: 0.3s, import torch: 4.6s, import gradio: 1.3s, setup paths: 1.4s, initialize shared: 0.5s, other imports: 0.7s, load scripts: 1.1s, create ui: 0.9s, gradio launch: 0.7s).
Applying attention optimization: Doggettx... done.
Model loaded in 5.5s (load weights from disk: 1.1s, create model: 0.6s, apply weights to model: 3.3s, calculate empty prompt: 0.4s).
10%|████████▎ | 2/20 [00:06<00:58, 3.22s/it]Exception in thread MemMon:█▋ | 2/20 [00:02<00:23, 1.28s/it]
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "D:\AI\stable-diffusion-webui\modules\memmon.py", line 53, in run
free, total = self.cuda_mem_get_info()
File "D:\AI\stable-diffusion-webui\modules\memmon.py", line 34, in cuda_mem_get_info
return torch.cuda.mem_get_info(index)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 663, in mem_get_info
return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: CUDA error: an illegal instruction was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
*** Error completing request
*** Arguments: ('task(rxe7j3hmflc31l0)', <gradio.routes.Request object at 0x000002B1642C5A20>, 'hello', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "D:\AI\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\AI\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "D:\AI\stable-diffusion-webui\modules\processing.py", line 845, in process_images
res = process_images_inner(p)
File "D:\AI\stable-diffusion-webui\modules\processing.py", line 981, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\AI\stable-diffusion-webui\modules\processing.py", line 1328, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "D:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 218, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\AI\stable-diffusion-webui\modules\sd_samplers_common.py", line 272, in launch_sampling
return func()
File "D:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 218, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 18, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 32, in __call__
return self.__orig_func(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1568, in _call_impl
result = forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
x = block(x, context=context[i])
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 539, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 273, in _forward
x = self.attn2(self.norm2(x), context=context) + x
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 240, in split_cross_attention_forward
q, k, v = (rearrange(t, 'b n (h d) -> (b h) n d', h=h) for t in (q_in, k_in, v_in))
File "D:\AI\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 240, in <genexpr>
q, k, v = (rearrange(t, 'b n (h d) -> (b h) n d', h=h) for t in (q_in, k_in, v_in))
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 487, in rearrange
return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 410, in reduce
return _apply_recipe(recipe, tensor, reduction_type=reduction)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 239, in _apply_recipe
return backend.reshape(tensor, final_shapes)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\einops\_backends.py", line 84, in reshape
return x.reshape(shape)
RuntimeError: CUDA error: an illegal instruction was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
---
Traceback (most recent call last):
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\call_queue.py", line 77, in f
devices.torch_gc()
File "D:\AI\stable-diffusion-webui\modules\devices.py", line 81, in torch_gc
torch.cuda.empty_cache()
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 159, in empty_cache
torch._C._cuda_emptyCache()
RuntimeError: CUDA error: an illegal instruction was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Additional information
I updated to the latest NVIDIA driver.
I tried installing Fooocus, but only because 1111 stopped working.
I have oobabooga installed on my system. Maybe the CUDA version there is different?
Could it possibly clash with 1111?
|
open
|
2024-05-13T05:14:24Z
|
2024-10-18T10:23:20Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15773
|
[
"bug-report"
] |
ClaudeRobbinCR
| 1
|
tflearn/tflearn
|
data-science
| 578
|
possible typo line 584
|
Currently:
`self.train_var = to_list(self.train_vars)`
Should it be this?
`self.train_vars = to_list(self.train_vars)`
|
open
|
2017-01-29T07:18:39Z
|
2017-01-29T07:18:39Z
|
https://github.com/tflearn/tflearn/issues/578
|
[] |
ecohen1
| 0
|
pydantic/FastUI
|
pydantic
| 165
|
Proxy Support
|
I am running my FastAPI app behind a proxy. The issue I am running into:
assume my path is `https://example.com/proxy/8989/`
1. the root path is invoked
2. the HTMLResponse(prebuilt_html(title='FastUI Demo')) is sent
3. the following content is run
```
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>FastUI Demo</title>
<script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@pydantic/fastui-prebuilt@0.0.15/dist/assets/index.js"></script>
<link rel="stylesheet" crossorigin href="https://cdn.jsdelivr.net/npm/@pydantic/fastui-prebuilt@0.0.15/dist/assets/index.css">
</head>
<body>
<div id="root"></div>
</body>
</html>
```
4. The index.js is loaded
5. Then the React app makes a fetch call to `https://example.com/api/proxy/8989/` instead of `https://example.com/proxy/8989/api/`
Is this expected? Any plans on fixing it?
|
closed
|
2024-01-25T17:36:33Z
|
2024-02-09T07:04:48Z
|
https://github.com/pydantic/FastUI/issues/165
|
[] |
stikkireddy
| 1
|
facebookresearch/fairseq
|
pytorch
| 5,244
|
facebook/mbart-large-50 VS facebook/mbart-large-50-many-to-many-mmt
|
Hi!!
I am conducting some experiments on NMT for low-resource languages. For this purpose I am fine-tuning mBART in a number of directions: English to Sinhala, English to Tamil, and Sinhala to Tamil. I am using the Hugging Face platform to perform this fine-tuning. While selecting the model I have two questions, and I would highly appreciate it if someone could help:
1. What is the difference between mbart-large-50 and mbart-large-50-many-to-many-mmt? I have been using mbart-large-50 for fine-tuning since it supports the above-mentioned languages. Am I using the correct model for this purpose, or should I go with many-to-many?
2. I want to confirm the model size as well: is it 610M?
Thank you in advance!!
|
open
|
2023-07-09T11:23:31Z
|
2023-07-09T11:23:31Z
|
https://github.com/facebookresearch/fairseq/issues/5244
|
[
"question",
"needs triage"
] |
vmenan
| 0
|
pytest-dev/pytest-django
|
pytest
| 964
|
Drop support for unsupported Python and Django versions
|
closed
|
2021-11-22T11:05:01Z
|
2021-12-01T19:45:55Z
|
https://github.com/pytest-dev/pytest-django/issues/964
|
[] |
pauloxnet
| 2
|
|
QingdaoU/OnlineJudge
|
django
| 51
|
Creating users in bulk
|
Is there a way to create users in bulk?
|
closed
|
2016-06-24T15:18:56Z
|
2016-06-28T07:02:55Z
|
https://github.com/QingdaoU/OnlineJudge/issues/51
|
[] |
kevin50406418
| 1
|
littlecodersh/ItChat
|
api
| 101
|
How do I get the actual binary image data of a group member's avatar?
|
For example, I can get:
``` python
"HeadImgUrl": "/cgi-bin/mmwebwx-bin/webwxgetheadimg?seq=642242818&sername=@@21ec4b514edf3e7cb867e0512fb85f3a5e6deb657f4e8573d656bcd4558e3594&skey=",
```
But how do I fetch the avatar itself? What domain should be prepended to this path?
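A possible shortcut, assuming a recent itchat version: the library ships a `get_head_img` helper that fetches the avatar bytes for you, so you don't need to build the URL by hand. A sketch with hypothetical UserNames:
```python
import itchat

itchat.auto_login()
# Hypothetical placeholders below; get_head_img writes the raw avatar
# bytes to picDir.
itchat.get_head_img(
    userName="@member_username",          # the member inside the group
    chatroomUserName="@@group_username",  # the group itself
    picDir="avatar.jpg",
)
```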
|
closed
|
2016-10-16T20:31:54Z
|
2017-02-02T14:45:39Z
|
https://github.com/littlecodersh/ItChat/issues/101
|
[
"question"
] |
9cat
| 4
|
graphql-python/graphene-mongo
|
graphql
| 220
|
Releases v0.2.16 and v0.3.0 missing on PyPI
|
Thanks for the work on this project, happy user here!
I noticed that there are releases on https://github.com/graphql-python/graphene-mongo/releases that are not on https://pypi.org/project/graphene-mongo/#history. Is this an oversight or is there something preventing you from releasing on PyPI?
In the meantime I am installing v0.3.0 directly from github, but it would be nice if it were on PyPI instead.
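For anyone else in the same spot, installing a tagged release straight from GitHub typically looks like this (assuming the tag is literally named `v0.3.0`):
```
pip install "git+https://github.com/graphql-python/graphene-mongo.git@v0.3.0"
```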
|
open
|
2023-05-02T14:55:36Z
|
2023-07-31T07:20:21Z
|
https://github.com/graphql-python/graphene-mongo/issues/220
|
[] |
mathiasose
| 4
|
Lightning-AI/pytorch-lightning
|
data-science
| 20,391
|
Error if SLURM_NTASKS != SLURM_NTASKS_PER_NODE
|
### Bug description
Would it be possible for Lightning to raise an error if `SLURM_NTASKS != SLURM_NTASKS_PER_NODE` in case both are set?
With a single node the current behavior is:
* `SLURM_NTASKS == SLURM_NTASKS_PER_NODE`: Everything is fine
* `SLURM_NTASKS > SLURM_NTASKS_PER_NODE`: Slurm doesn't let you schedule the job and raises an error
* `SLURM_NTASKS < SLURM_NTASKS_PER_NODE`: Lightning thinks there are `SLURM_NTASKS_PER_NODE` devices but the job only runs on `SLURM_NTASKS` devices.
Example scripts:
```
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --nodes=1
#SBATCH --gres=gpu:2
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=3
source .venv/bin/activate
srun python train_lightning.py
```
And `train_lightning.py`:
```
from pytorch_lightning.demos.boring_classes import BoringModel, BoringDataModule
from pytorch_lightning import Trainer
import os
def main():
print(
f"LOCAL_RANK={os.environ.get('LOCAL_RANK', 0)}, SLURM_NTASKS={os.environ.get('SLURM_NTASKS')}, SLURM_NTASKS_PER_NODE={os.environ.get('SLURM_NTASKS_PER_NODE')}"
)
model = BoringModel()
datamodule = BoringDataModule()
trainer = Trainer(max_epochs=100)
print(f"trainer.num_devices: {trainer.num_devices}")
trainer.fit(model, datamodule)
if __name__ == "__main__":
main()
```
This generates the following output:
```
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
----------------------------------------------------------------------------------------------------
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [1,2]
| Name | Type | Params | Mode
-----------------------------------------
0 | layer | Linear | 66 | train
-----------------------------------------
66 Trainable params
0 Non-trainable params
66 Total params
0.000 Total estimated model params size (MB)
1 Modules in train mode
0 Modules in eval mode
SLURM auto-requeueing enabled. Setting signal handlers.
LOCAL_RANK=0, SLURM_NTASKS=1, SLURM_NTASKS_PER_NODE=2
trainer.num_devices: 2
```
`MEMBER: 1/1` indicates that only 1 GPU is used but `trainer.num_devices` returns 2. `nvidia-smi` also indicates that only a single device is used.
Not sure if there is a valid use case for `SLURM_NTASKS < SLURM_NTASKS_PER_NODE`, but if there is not, it would be awesome if Lightning could raise an error in this scenario.
The same error also happens if `--ntasks-per-node` is not set. In this case Lightning assumes 2 devices (I guess based on `CUDA_VISIBLE_DEVICES`) but in reality only a single one is used.
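For reference, a minimal sketch of the kind of check being requested here (not Lightning's actual code; it only covers the single-node case, since `SLURM_NTASKS_PER_NODE` can take forms like `2(x3)` on multi-node jobs):
```python
import os

def check_slurm_task_consistency():
    """Raise if SLURM_NTASKS and SLURM_NTASKS_PER_NODE disagree (single-node case)."""
    ntasks = os.environ.get("SLURM_NTASKS")
    per_node = os.environ.get("SLURM_NTASKS_PER_NODE")
    if ntasks is not None and per_node is not None and int(ntasks) != int(per_node):
        raise RuntimeError(
            f"SLURM_NTASKS={ntasks} does not match SLURM_NTASKS_PER_NODE={per_node}; "
            "Lightning may detect more devices than SLURM actually launches."
        )
```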
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA GeForce RTX 4090
- NVIDIA GeForce RTX 4090
- NVIDIA GeForce RTX 4090
- NVIDIA GeForce RTX 4090
- available: True
- version: 12.4
* Lightning:
- lightning-utilities: 0.11.8
- pytorch-lightning: 2.4.0
- torch: 2.5.1
- torchmetrics: 1.4.3
- torchvision: 0.20.1
* Packages:
- aenum: 3.1.15
- aiohappyeyeballs: 2.4.3
- aiohttp: 3.10.10
- aiosignal: 1.3.1
- annotated-types: 0.7.0
- antlr4-python3-runtime: 4.9.3
- attrs: 24.2.0
- autocommand: 2.2.2
- backports.tarfile: 1.2.0
- certifi: 2024.8.30
- charset-normalizer: 3.4.0
- eval-type-backport: 0.2.0
- filelock: 3.16.1
- frozenlist: 1.5.0
- fsspec: 2024.10.0
- hydra-core: 1.3.2
- idna: 3.10
- importlib-metadata: 8.0.0
- importlib-resources: 6.4.0
- inflect: 7.3.1
- jaraco.collections: 5.1.0
- jaraco.context: 5.3.0
- jaraco.functools: 4.0.1
- jaraco.text: 3.12.1
- jinja2: 3.1.4
- lightly: 1.5.13
- lightning-utilities: 0.11.8
- markupsafe: 3.0.2
- more-itertools: 10.3.0
- mpmath: 1.3.0
- multidict: 6.1.0
- networkx: 3.4.2
- numpy: 2.1.3
- nvidia-cublas-cu12: 12.4.5.8
- nvidia-cuda-cupti-cu12: 12.4.127
- nvidia-cuda-nvrtc-cu12: 12.4.127
- nvidia-cuda-runtime-cu12: 12.4.127
- nvidia-cudnn-cu12: 9.1.0.70
- nvidia-cufft-cu12: 11.2.1.3
- nvidia-curand-cu12: 10.3.5.147
- nvidia-cusolver-cu12: 11.6.1.9
- nvidia-cusparse-cu12: 12.3.1.170
- nvidia-nccl-cu12: 2.21.5
- nvidia-nvjitlink-cu12: 12.4.127
- nvidia-nvtx-cu12: 12.4.127
- omegaconf: 2.3.0
- packaging: 24.1
- pillow: 11.0.0
- platformdirs: 4.2.2
- propcache: 0.2.0
- psutil: 6.1.0
- pyarrow: 18.0.0
- pydantic: 2.9.2
- pydantic-core: 2.23.4
- python-dateutil: 2.9.0.post0
- pytorch-lightning: 2.4.0
- pytz: 2024.2
- pyyaml: 6.0.2
- requests: 2.32.3
- setuptools: 75.3.0
- six: 1.16.0
- sympy: 1.13.1
- tomli: 2.0.1
- torch: 2.5.1
- torchmetrics: 1.4.3
- torchvision: 0.20.1
- tqdm: 4.66.6
- triton: 3.1.0
- typeguard: 4.3.0
- typing-extensions: 4.12.2
- urllib3: 2.2.3
- wheel: 0.43.0
- yarl: 1.17.1
- zipp: 3.19.2
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.12.3
- release: 6.8.0-38-generic
- version: #38-Ubuntu SMP PREEMPT_DYNAMIC Fri Jun 7 15:25:01 UTC 2024
</details>
### More info
_No response_
|
open
|
2024-11-04T16:19:56Z
|
2024-11-19T00:18:48Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20391
|
[
"working as intended",
"ver: 2.4.x"
] |
guarin
| 1
|
taverntesting/tavern
|
pytest
| 24
|
Getting requests.exceptions.InvalidHeader:
|
In my test.tavern.yaml file
```
- name: Make sure signature is returned
request:
url: "{signature_url:s}"
method: PUT
headers:
content-type: application/json
content: {"ppddCode": "11","LIN": "123456789","correlationID":"{correlationId:s}","bodyTypeCode":"utv"}
token: "{sessionToken:s}"
response:
status_code: 200
save:
body:
signature: signature
```
I get the following error response:
```
E requests.exceptions.InvalidHeader: Value for header {content: {'ppddCode': '11', 'LIN': '123456789', 'correlationID': '99019c36-4f49-4be8-815c-9ff5cbcc14ff', 'bodyTypeCode': 'utv'}} must be of type str or bytes, not <class 'dict'>
venv/lib/python3.6/site-packages/requests/utils.py:872: InvalidHeader
```
How should I pass this 'content' value in the request (the header value must be a string, not a dict)? Or is there another solution?
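In plain `requests` terms, the equivalent fix looks like this (a sketch; `signature_url`, `correlation_id` and `session_token` stand in for the saved values from the test above). In tavern that should correspond to moving the dict out of `headers` and into the request's `json` key:
```python
import requests

# A dict can't be a header value; send it as the JSON body instead and keep
# the headers as plain strings.
resp = requests.put(
    signature_url,
    json={"ppddCode": "11", "LIN": "123456789",
          "correlationID": correlation_id, "bodyTypeCode": "utv"},
    headers={"token": session_token},  # requests sets Content-Type for json=
)
```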
|
closed
|
2018-02-06T21:58:48Z
|
2018-02-07T17:22:38Z
|
https://github.com/taverntesting/tavern/issues/24
|
[] |
sridharaiyer
| 2
|
ansible/awx
|
django
| 15,775
|
How Do I Fail an Ansible Playbook Run or AWX Job when Hosts are Skipped?
|
### Please confirm the following
- [x] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [x] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [x] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Feature type
New Feature
### Feature Summary
Hi
How Do I Fail an Ansible Playbook Run or AWX Job when Hosts are Skipped?
Currently my AWX job is reported as "Successful" even though nothing actually ran.
Thanks
### Select the relevant components
- [x] UI
- [ ] API
- [x] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Steps to reproduce
Run a job with an offline host and finally job reported successful status with "skipping: no hosts matched"
### Current results
Job is currently "Successful" even if hosts are skipped:
skipping: no hosts matched
For Job:
Status: Successful
### Suggested feature result
The job should fail when no hosts are matched:
"skipping: no hosts matched"
### Additional information
_No response_
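As a possible stopgap while this is triaged (an assumption on my part, not verified against AWX): core Ansible appears to have a config option that promotes this warning to a hard error:
```ini
# ansible.cfg -- assumption: host_pattern_mismatch (Ansible >= 2.8) turns
# "skipping: no hosts matched" from a warning into a failure.
[inventory]
host_pattern_mismatch = error
```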
|
closed
|
2025-01-27T08:36:31Z
|
2025-02-05T18:11:39Z
|
https://github.com/ansible/awx/issues/15775
|
[
"type:enhancement",
"needs_triage",
"community"
] |
bskou
| 1
|
keras-team/autokeras
|
tensorflow
| 896
|
How to specify the training batch_size?
|
When I train a StructuredDataClassifier model, I want to specify the training batch_size of input, should I specify the batch_size in fit function? I have try in fit function, but it seems not help.
|
closed
|
2020-01-13T09:04:56Z
|
2020-03-21T02:01:11Z
|
https://github.com/keras-team/autokeras/issues/896
|
[
"bug report",
"wontfix"
] |
Peng-wei-Yu
| 2
|
slackapi/bolt-python
|
fastapi
| 419
|
Load testing an app - is there a way to disable or pass authorization
|
I am trying to implement load-testing for my Django app that utilizes Bolt.
I want to stress-test it using the Locust framework, basically firing HTTP requests at the server running the app to see the response times. The app does not really have to make any requests to the Slack API for that – I am planning to patch `App.client` (not sure if it's possible, though) so it will just sleep a little, imitating a call to the Slack API.
I managed to sign requests properly, but the problem I am facing is that I cannot get past authorization middleware.
Whenever I use a multi-workspace installation I always get a `Please install this app into the workspace` response. I've also tried a single-workspace setup, but I get a similar error.
I see the following in logs:
```
The stored bot token for enterprise_id: None team_id: T111 is no longer valid. (response: {'ok': False, 'error': 'invalid_auth'})
```
Does Bolt try to communicate with the Slack API to check the token?
If so, is there a way to disable it temporarily for test purposes?
I've tried passing `request_verification_enabled=False` and `token_verification_enabled=False` when initializing App, but it didn't change anything.
Could you please advise?
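For context, one approach I'm considering (a sketch for load tests only, assuming this slack_bolt version accepts the `authorize` kwarg): supplying a stub `authorize` callable so Bolt skips the token lookup and `auth.test` call entirely:
```python
from slack_bolt import App
from slack_bolt.authorization import AuthorizeResult

def fake_authorize(enterprise_id, team_id, user_id):
    # Returns a canned result instead of resolving a real installation.
    return AuthorizeResult(
        enterprise_id=enterprise_id,
        team_id=team_id,
        bot_token="xoxb-fake-token-for-load-tests",  # hypothetical value
    )

app = App(signing_secret="my-signing-secret",  # placeholder
          authorize=fake_authorize)
```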
|
closed
|
2021-07-23T02:37:28Z
|
2021-08-06T09:50:54Z
|
https://github.com/slackapi/bolt-python/issues/419
|
[
"question"
] |
DataGreed
| 4
|
ansible/ansible
|
python
| 83,884
|
uri module encodes uploaded file in base64 but it can't be handled by the server
|
### Summary
I have the exact same issue as described in
https://github.com/ansible/ansible/issues/73621
Since that issue was closed and no more comments could be added, I have to create another issue to address it. Sorry for this.
I saw your comments in that issue. I agree base64 encoding is the standard way.
However, the server can't handle it and we have no way to change it
since we don't own that server. It expects the file in the raw format.
Could it be fixed or worked around in any way?
### Issue Type
Bug Report
### Component Name
uri
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.11]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/cwhuang/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/cwhuang/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.9 (main, Apr 17 2024, 00:00:00) [GCC 13.2.1 20240316 (Red Hat 13.2.1-7)] (/usr/bin/python3)
jinja version = 3.1.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
$ cat /etc/ansible/ansible.cfg
# Since Ansible 2.12 (core):
# To generate an example config file (a "disabled" one with all default settings, commented out):
# $ ansible-config init --disabled > ansible.cfg
#
# Also you can now have a more complete file by including existing plugins:
# ansible-config init --disabled -t all > ansible.cfg
# For previous versions of Ansible you can check for examples in the 'stable' branches of each version
# Note that this file was always incomplete and lagging changes to configuration settings
# for example, for 2.9: https://github.com/ansible/ansible/blob/stable-2.9/examples/ansible.cfg
```
### OS / Environment
Fedora 38
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-playbook -l rs700 -t update -i ~/ansible/inventory.yaml ~/ansible/customer.yaml
- name: "Update {{ t }} of {{ inventory_hostname }}"
tags: update
ansible.builtin.uri:
url: "https://{{ bmc_addr }}/redfish/v1/UpdateService/upload"
method: POST
force_basic_auth: true
user: "{{ bmc_user }}"
password: "{{ bmc_password }}"
status_code: 202,200
body_format: form-multipart
body:
UpdateParameters:
mime_type: application/json
filename: "/tmp/update-bios.json"
OemParameters:
mime_type: application/json
filename: "/tmp/oem-bios.json"
UpdateFile:
filename: "~/{{ inventory_hostname }}/{{ hostvars[inventory_hostname][t]['image'] }}"
mime_type: application/octet-stream
encoding: none
text_form_field: value
validate_certs: false
timeout: 300
```
### Expected Results
The firmware file is uploaded correctly and the server (a BMC) handles it correctly.
### Actual Results
```console
fatal: [rs700]: FAILED! => {"changed": false, "connection": "close", "content_length": "643", "content_type": "application/json; charset=UTF-8", "date": "Fri, 30 Aug 2024 22:58:19 GMT", "elapsed": 6, "json": {"error": {"@Message.ExtendedInfo": [{"@odata.type": "#Message.v1_0_8.Message", "Message": "The file, UpdateParameters, submitted is a malformed JSON file and could not be parsed successfully by the receiving service.", "MessageArgs": ["UpdateParameters"], "MessageId": "UpdateService.1.0.MalformedJSONFile", "RelatedProperties": ["UpdateParameters"], "Resolution": "Ensure that the file sent is a proper valid JSON file and resubmit the request.", "Severity": "Critical"}], "code": "UpdateService.1.0.MalformedJSONFile", "message": "The file, UpdateParameters, submitted is a malformed JSON file and could not be parsed successfully by the receiving service."}}, "msg": "Status code was 400 and not [202, 200]: HTTP Error 400: Bad Request", "odata_version": "4.0", "redirected": false, "server": "AMI MegaRAC Redfish Service", "status": 400, "url": "https://10.10.95.111/redfish/v1/UpdateService/upload"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
closed
|
2024-09-02T04:36:01Z
|
2024-09-17T13:00:02Z
|
https://github.com/ansible/ansible/issues/83884
|
[
"module",
"bug",
"affects_2.14"
] |
cwhuang
| 6
|
voila-dashboards/voila
|
jupyter
| 543
|
Too many ipyleaflet layers fail to render
|
I noticed that notebooks with too many ipyleaflet layers cannot be properly rendered with Voila. The same notebooks do render on mybinder.org, so this could be a problem on my end. That being said, I'd be happy if notebooks that can be rendered with JupyterLab locally could be rendered with Voila as well, or if Voila at least issued a warning saying some widgets were not rendered.
Below is a table of results with [a reproducible example](https://gist.github.com/yudai-nkt/dcdd9ffb354ab389ca077093cb57ef67) and GIFs capturing the results on my local end.
| | JupyterLab | Voila |
| --- | ---- | ---- |
| my end | success | fail |
| online (nbviewer or binder) | [success](https://nbviewer.jupyter.org/gist/yudai-nkt/dcdd9ffb354ab389ca077093cb57ef67/mwe.ipynb) | [success](https://mybinder.org/v2/gist/yudai-nkt/dcdd9ffb354ab389ca077093cb57ef67/master?urlpath=%2Fvoila%2Frender%2Fmwe.ipynb) |
**JupyterLab**:

**Voila**

This issue might be a duplicate of #534, but I'm afraid I'm not quite sure. Feel free to close if that's the case.
|
open
|
2020-02-15T19:32:37Z
|
2022-01-20T06:07:12Z
|
https://github.com/voila-dashboards/voila/issues/543
|
[] |
yudai-nkt
| 4
|
wagtail/wagtail
|
django
| 12,703
|
Stop sidebar re-rendering when clicking the page
|
Spotted as part of the [Wagtail 6.3 admin UI performance audit](https://github.com/wagtail/wagtail/discussions/12578). Our sidebar components are unnecessarily listening to click events on the whole page, most likely as part of their "click outside" logic.
We should either remove this unneeded event listening, or if it has to be kept, make sure it "exits early" rather than running React’s rendering logic every time.
Spotted with the React Developer Tools’ performance marks. Sample of the flamegraph:

|
closed
|
2024-12-17T11:08:11Z
|
2025-01-20T07:34:00Z
|
https://github.com/wagtail/wagtail/issues/12703
|
[
"type:Cleanup/Optimisation",
"🚀 Performance",
"component:Design system"
] |
thibaudcolas
| 0
|
kizniche/Mycodo
|
automation
| 501
|
DHT22 error
|
## Mycodo Issue Report:
- Specific Mycodo Version: 6.1.4
Raspberry PI 3B+
16 GB SD Card Class10
System : Linux mycodo 4.14.52-v7+ #1123 SMP Wed Jun 27 17:35:49 BST 2018 armv7l GNU/Linux
Power 5v 2.0 Amp
Sensor : DHT22 ( 3 meters ) , DS18B20 , Original Atlas PH i2c
I've already tried everything: other pins, more power, tested the soldering.
The DHT22 runs fine, but only for a few hours. After that, the Raspberry Pi must be disconnected from power and restarted.
A restart via the console is not possible; the Raspberry Pi must be taken off the mains.
After the restart, everything runs again for a few hours.
Sorry for my bad English!
#### Problem Description
The DHT22 humidity sensor always fails after 2-5 hours.
### Errors from Daemon Log
2018-07-10 17:05:08,952 - mycodo.output - WARNING - Cannot turn on Output with ID 0. It doesn't exist
2018-07-10 17:05:16,201 - mycodo.input_d477bba3 - ERROR - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2018-07-10 17:05:26,845 - mycodo.inputs.dht22_3 - INFO - Turning off sensor
2018-07-10 17:05:26,851 - mycodo.output - WARNING - Cannot turn off Output with ID 0. It doesn't exist
2018-07-10 17:05:29,854 - mycodo.inputs.dht22_3 - INFO - Turning on sensor
2018-07-10 17:05:29,858 - mycodo.output - WARNING - Cannot turn on Output with ID 0. It doesn't exist
2018-07-10 17:05:47,748 - mycodo.inputs.dht22_3 - INFO - Turning off sensor
2018-07-10 17:05:47,754 - mycodo.output - WARNING - Cannot turn off Output with ID 0. It doesn't exist
2018-07-10 17:05:50,759 - mycodo.inputs.dht22_3 - INFO - Turning on sensor
2018-07-10 17:05:50,763 - mycodo.output - WARNING - Cannot turn on Output with ID 0. It doesn't exist
2018-07-10 17:06:08,662 - mycodo.inputs.dht22_3 - INFO - Turning off sensor
2018-07-10 17:06:08,668 - mycodo.output - WARNING - Cannot turn off Output with ID 0. It doesn't exist
2018-07-10 17:06:11,672 - mycodo.inputs.dht22_3 - INFO - Turning on sensor
2018-07-10 17:06:11,678 - mycodo.output - WARNING - Cannot turn on Output with ID 0. It doesn't exist
2018-07-10 17:06:18,926 - mycodo.input_d477bba3 - ERROR - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
**from mycodo.log**
2018-07-09 06:26:36,494 - mycodo.output - WARNING - Cannot turn on Output with ID 0. It doesn't exist
2018-07-09 06:26:38,500 - mycodo.inputs.dht22_3 - ERROR - Could not initialize sensor. Check if it's connected properly and pigpiod is running. Error: unpack requires a bytes object of length 16
2018-07-09 06:17:21,688 - mycodo.daemon - INFO - Mycodo daemon v6.1.4 starting
**syslog**
Jul 10 18:24:27 easygrow systemd[1]: Reloading.
Jul 10 18:24:28 easygrow systemd[1]: Stopping pigpiod service (low latency)...
Jul 10 18:24:28 easygrow systemd[1]: Stopped pigpiod service (low latency).
Jul 10 18:24:28 easygrow systemd[1]: Starting pigpiod service (low latency)...
Jul 10 18:24:28 easygrow systemd[1]: Started pigpiod service (low latency).
Jul 10 18:24:28 easygrow pigpiod[30721]: 2018-07-10 18:24:28 initInitialise: bind to port 8888 failed (Address already in use)
Jul 10 18:24:28 easygrow pigpiod[30721]: Can't initialise pigpio library
Jul 10 18:24:28 easygrow systemd[1]: pigpiod_low.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 18:24:28 easygrow killall[30727]: pigpiod: Kein Prozess gefunden
Jul 10 18:24:28 easygrow systemd[1]: pigpiod_low.service: Control process exited, code=exited status=1
Jul 10 18:24:28 easygrow systemd[1]: pigpiod_low.service: Unit entered failed state.
Jul 10 18:24:28 easygrow systemd[1]: pigpiod_low.service: Failed with result 'exit-code'.
Could you please help me please !
|
closed
|
2018-07-10T15:32:01Z
|
2018-10-13T14:47:37Z
|
https://github.com/kizniche/Mycodo/issues/501
|
[] |
pmunz75
| 5
|
gradio-app/gradio
|
data-visualization
| 9,876
|
Parameter passing of button.click()
|
### Describe the bug
When using `button.click(fn=..., inputs=[],...)`, if an input is a Button component, what the target `fn` receives is not the component object but its current value (a string).
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def setup_feedback_buttons(like_btn: gr.Button, dislike_btn: gr.Button):
print(f"Before setting visible: like_btn.visible = {like_btn.visible}, dislike_btn.visible = {dislike_btn.visible}")
like_btn.visible = True
dislike_btn.visible = True
with gr.Blocks() as demo:
like_btn = gr.Button("Like", visible=False)
dislike_btn = gr.Button("Dislike", visible=False)
submit_btn = gr.Button("Submit")
submit_btn.click(fn=setup_feedback_buttons, inputs=[like_btn, dislike_btn], outputs=None)
demo.launch(debug=True)
```
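For what it's worth, Gradio delivers `inputs` components to the callback as their current values (a `Button`'s value is its label string), so mutating the objects inside `fn` can't work. A sketch of the usual pattern, updating visibility through `outputs` instead:
```python
import gradio as gr

def setup_feedback_buttons():
    # Return updates for the output components instead of mutating inputs.
    return gr.update(visible=True), gr.update(visible=True)

with gr.Blocks() as demo:
    like_btn = gr.Button("Like", visible=False)
    dislike_btn = gr.Button("Dislike", visible=False)
    submit_btn = gr.Button("Submit")
    submit_btn.click(fn=setup_feedback_buttons, inputs=None,
                     outputs=[like_btn, dislike_btn])

demo.launch(debug=True)
```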
### Screenshot
_No response_
### Logs
```shell
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/gradio/queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/dist-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1935, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1520, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "<ipython-input-4-ce9f8f625052>", line 4, in setup_feedback_buttons
print(f"Before setting visible: like_btn.visible = {like_btn.visible}, dislike_btn.visible = {dislike_btn.visible}")
AttributeError: 'str' object has no attribute 'visible'
```
### System Info
```shell
Package Version
------------------ -----------
aiofiles 23.2.1
annotated-types 0.7.0
anyio 4.6.2.post1
Brotli 1.1.0
certifi 2024.8.30
cffi 1.17.1
charset-normalizer 3.4.0
click 8.1.7
colorama 0.4.6
contourpy 1.3.0
cycler 0.12.1
dnspython 2.7.0
email_validator 2.2.0
exceptiongroup 1.2.2
fastapi 0.115.4
fastapi-cli 0.0.5
ffmpy 0.3.0
filelock 3.16.1
fonttools 4.54.1
fsspec 2024.10.0
gradio 5.1.0
gradio_client 1.4.0
h11 0.14.0
h2 4.1.0
hpack 4.0.0
httpcore 1.0.6
httptools 0.6.1
httpx 0.27.2
huggingface_hub 0.26.2
hyperframe 6.0.1
idna 3.10
Jinja2 3.1.4
kiwisolver 1.4.7
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.9.2
mdurl 0.1.2
numpy 2.1.2
orjson 3.10.10
packaging 24.1
pandas 2.2.3
pillow 10.2.0
pip 24.3.1
pycparser 2.22
pydantic 2.0.3
pydantic_core 2.3.0
pydub 0.25.1
Pygments 2.18.0
pyparsing 3.2.0
PySocks 1.7.1
python-dateutil 2.9.0
python-dotenv 1.0.1
python-multipart 0.0.16
pytz 2024.1
PyYAML 6.0.2
requests 2.32.3
rich 13.9.3
ruff 0.7.1
semantic-version 2.10.0
setuptools 75.1.0
shellingham 1.5.4
six 1.16.0
sniffio 1.3.1
starlette 0.41.2
tomlkit 0.12.0
tqdm 4.66.6
typer 0.12.5
typer-slim 0.12.5
typing_extensions 4.12.2
tzdata 2024.2
unicodedata2 15.1.0
urllib3 2.2.3
uvicorn 0.32.0
uvloop 0.21.0
watchfiles 0.24.0
websockets 12.0
wheel 0.44.0
zstandard 0.23.0
I tried using multiple different versions of python(3.10,3.11,3.12,3.13) and gradio(4.44,4.44.1,5.4) on Ubuntu 18.04 and Ubuntu 22.04, and all got the same error
```
### Severity
I can work around it
|
closed
|
2024-10-31T08:29:56Z
|
2024-11-01T01:43:12Z
|
https://github.com/gradio-app/gradio/issues/9876
|
[
"bug"
] |
Semper4u
| 3
|
tqdm/tqdm
|
pandas
| 953
|
Provide a default computer unit handling (IEEE1541)
|
```
4.45.0
3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)]
win32 ## (but its a win10 x64 machine)
```
In the issue https://github.com/tqdm/tqdm/issues/952 I gave the following use case:
with TQDM 4.45.0, when I write some downloading code, I try to use IEEE1541 units.
So, I wrote this:
```
with tqdm(total=total_size, unit_scale=True, unit_divisor=1024, unit="iB") as pbar, open(file_path, "wb") as f:
pbar.set_description("Download {}".format(file_data["full_url"]))
def cb(data):
l = len(data)
pbar.update(l)
f.write(data)
```
We could define a `unit_scale="binary"` or `unit_scale="IEEE_1541"` option that would treat all counters as being in `iB` and adapt the `unit`, `unit_divisor`, and of course the K/M/G/... scale prefixes accordingly.
I believe this approach could echo with other issues such as https://github.com/tqdm/tqdm/issues/825 ; https://github.com/tqdm/tqdm/issues/807 ; https://github.com/tqdm/tqdm/issues/646 ; and I'm pretty sure an issue exists concerning iterations over streams with sizes not multiple of the scaling factor but can't find it anymore.
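As a small aside, tqdm already ships a size formatter that such an option could build on; with `divisor=1024` it produces binary-style prefixes:
```python
from tqdm import tqdm

# format_sizeof is a tqdm helper; divisor=1024 switches to binary scaling.
print(tqdm.format_sizeof(123456789, suffix="B", divisor=1024))  # e.g. '118MB'
```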
|
open
|
2020-04-28T16:44:09Z
|
2020-10-27T18:38:44Z
|
https://github.com/tqdm/tqdm/issues/953
|
[
"p4-enhancement-future 🧨"
] |
LoneWanderer-GH
| 3
|
ydataai/ydata-profiling
|
jupyter
| 995
|
cannot import name 'soft_unicode' from 'markupsafe'
|
### Current Behaviour
Used colab with 3.2.0
```
!pip install pandas-profiling==3.2.0
import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport
df = pd.DataFrame(np.random.rand(100, 5), columns=["a", "b", "c", "d", "e"])
```
it shows
ImportError: cannot import name 'soft_unicode' from 'markupsafe' (/usr/local/lib/python3.7/dist-packages/markupsafe/__init__.py)
### Expected Behaviour
no error
### Data Description
None
### Code that reproduces the bug
```Python
!pip install pandas-profiling==3.2.0
import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport
df = pd.DataFrame(np.random.rand(100, 5), columns=["a", "b", "c", "d", "e"])
```
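A commonly reported workaround for this error (an assumption that it applies here too): `soft_unicode` was removed in markupsafe 2.1, so pinning markupsafe before importing pandas-profiling may help:
```Python
# Pin the last markupsafe release that still provided soft_unicode.
!pip install "markupsafe==2.0.1"
```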
### pandas-profiling version
2.3.0
### Dependencies
```Text
markupsafe==2.0.1
```
### OS
Mac
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
|
closed
|
2022-06-03T16:18:44Z
|
2022-09-30T18:39:05Z
|
https://github.com/ydataai/ydata-profiling/issues/995
|
[
"bug 🐛",
"code quality 📈"
] |
DaiZack
| 4
|
explosion/spaCy
|
machine-learning
| 13,533
|
[W030] Some entities could not be aligned in the text
|
Hi!
I tried training a custom Named Entity Recognition model using spaCy, but despite multiple trials, I get a warning telling me that there are misaligned entities in the training data that I had created.
```
import spacy
from spacy.training import Example
import random
nlp=spacy.load('en_core_web_sm')
training_data=[
("Hello from India", {"entities": [(11, 15, "GPE")]})
]
other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner']
nlp.disable_pipes(*other_pipes)
optimizer=nlp.create_optimizer()
losses={}
for i in range(10): #10 is the epoch value
random.shuffle(training_data)
for text, annotation in training_data:
doc = nlp.make_doc(text)
example = Example.from_dict(doc, annotation)
nlp.update([example], sgd = optimizer, losses=losses)
```
And the error generated is this. :
```
Warning (from warnings module):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/spacy/training/iob_utils.py", line 141
warnings.warn(
UserWarning: [W030] Some entities could not be aligned in the text "Hello from India" with entities "[(11, 15, 'GPE')]". Use `spacy.training.offsets_to_biluo_tags(nlp.make_doc(text), entities)` to check the alignment. Misaligned entities ('-') will be ignored during training.
```
The entity "India" starts from index 11 and ends at 15, yet spaCy doesn't recognise that it's a token. Any help is appreciated.
* Operating System: MacOS Ventura
* Python Version Used: 3.9
* spaCy Version Used: 3.1.0
|
closed
|
2024-06-19T07:08:40Z
|
2024-06-19T09:15:37Z
|
https://github.com/explosion/spaCy/issues/13533
|
[
"usage"
] |
NitGS
| 1
|
plotly/dash-table
|
dash
| 387
|
Incorrect Filtering of Numeric Data with Decimal Points
|
Hello,
I have a pandas dataframe of data that I am displaying in a data-table with filtering enabled. Filtering seems to work okay on numeric columns that are whole numbers, but when I try to filter decimal point numbers it seems to ignore the values after the decimal.
ex - `eq num(1.5)` filters to rows with column value equal to 1
`> num(0.5)` filters to all rows with column value greater than 0
I checked this on both Vivaldi and Firefox. Here's code for an example.
```
import dash
import dash_table
import dash_html_components as html
import pandas as pd
import numpy as np
drain = [0,0.2,0.6,1,1.5]
gate = [0,0.2,0.6,1,1.5]
drive = [0,0.2,0.6,1,1.5]
df = pd.DataFrame({
'Drain Bias': np.round(pd.to_numeric(drain), 3),
'Gate Bias': np.round(pd.to_numeric(gate), 3),
'Drive Bias': np.round(pd.to_numeric(drive), 3)
})
df.info()
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
app.layout = html.Div(children=[
dash_table.DataTable(
id='device-table',
css=[{
'selector': '.dash-cell div.dash-cell-value',
'rule': 'display: inline; white-space: inherit; overflow: inherit; text-overflow: inherit;'
}],
columns=[{"name": i, "id": i} for i in df.columns],
data=df.to_dict("rows"),
filtering=True,
editable=False,
sorting=True,
sorting_type="multi",
row_selectable="multi",
selected_rows=[],
n_fixed_rows=2,
style_cell={'width': '80px'},
style_table={
'maxHeight': '300',
'overflowY': 'scroll'
},
),
])
if __name__ == '__main__':
app.run_server(debug=True)
```
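One thing worth checking (an assumption; this may or may not apply to the dash-table version above): declaring the columns as numeric so the filter parses decimals rather than treating values as text:
```python
# Hypothetical tweak to the columns definition from the example above.
columns = [{"name": i, "id": i, "type": "numeric"} for i in df.columns]
```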
|
closed
|
2019-02-27T14:05:17Z
|
2019-02-27T17:29:06Z
|
https://github.com/plotly/dash-table/issues/387
|
[
"dash-type-bug"
] |
JAnderson419
| 1
|
strawberry-graphql/strawberry
|
django
| 3,264
|
Incongruity with the `Context` type across the library.
|
I'm developing an app using Django + Strawberry + Channels.
Until the introduction of Channels and related logic to manage subscriptions, I used a custom decorator to populate the `info.context` object of some selected queries and mutations with some extra stuff.
To be clear, something like this:
```py
from functools import wraps
def my_decorator(funct):
@wraps(funct)
def my_wrapper(info, *args, **kwargs):
info.context.my_property = get_some_extra_property()
return funct(info, *args, **kwargs)
return my_wrapper
```
It has always worked fine since the `context` is a [`StrawberryDjangoContext`](https://github.com/strawberry-graphql/strawberry/blob/a90dea2a3053e857aa2adf685d5c4f5d39c527c4/strawberry/django/context.py#L10-L22) dataclass object.
---
Now...
After introducing support for subscriptions, I ran into an exception: it seems that the `context` instantiated by [`GraphQLView`](https://github.com/strawberry-graphql/strawberry/blob/a90dea2a3053e857aa2adf685d5c4f5d39c527c4/strawberry/django/views.py#L198-L199) is actually different from the one instantiated by [`GraphQLHTTPConsumer`](https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/channels/handlers/http_handler.py#L239-L245)...
Even though they both only expose 2 properties: `request` & `response`.
The second one -unlike the first one- instantiates a simple `dict` and this led the application to raise a:
```
AttributeError: 'dict' object has no attribute 'my_property'.
```
---
To be honest, I've already fixed my application, so that's OK, but this difference triggers my OCD; IMHO it isn't really a good thing for a library, it creates user confusion, and that's why I opened this issue.
Make your own consideration and choose whatever approach you like... It's just a simple report.
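For reference, the defensive shape of my fix (a sketch; just plain Python handling both context shapes until they are unified):
```python
def set_context_value(info, key, value):
    ctx = info.context
    if isinstance(ctx, dict):   # the channels handlers build a plain dict
        ctx[key] = value
    else:                       # the Django view builds StrawberryDjangoContext
        setattr(ctx, key, value)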
|
open
|
2023-11-29T22:54:54Z
|
2025-03-20T15:56:29Z
|
https://github.com/strawberry-graphql/strawberry/issues/3264
|
[] |
Byloth
| 2
|
tqdm/tqdm
|
pandas
| 786
|
Suggestion: optionally redirect console logging to tqdm.write
|
This is somewhat related to #296, which is about stdout and stderr.
I believe the proposed example to redirect stdout and stderr doesn't work with logging, because it will already have saved the reference to stdout and stderr.
This seems to be a common "problem" with many related snippets. Rather than copying and pasting an example, it might be worth having an option included with `tqdm`?
Here is one possible solution:
```python
import logging
import sys
from contextlib import contextmanager
from typing import List
from tqdm import tqdm
class TqdmLoggingHandler(logging.StreamHandler):
def emit(self, record):
try:
msg = self.format(record)
tqdm.write(msg)
self.flush()
except (KeyboardInterrupt, SystemExit):
raise
except: # noqa pylint: disable=bare-except
self.handleError(record)
def _is_console_logging_handler(handler: logging.Handler) -> bool:
return isinstance(handler, logging.StreamHandler) and handler.stream in {sys.stdout, sys.stderr}
def _get_console_formatter(handlers: List[logging.Handler]) -> logging.Formatter:
for handler in handlers:
if _is_console_logging_handler(handler):
return handler.formatter
return None
@contextmanager
def redirect_logging_to_tqdm(logger: logging.Logger = None):
if logger is None:
logger = logging.root
tqdm_handler = TqdmLoggingHandler()
original_handlers = logger.handlers
tqdm_handler.setFormatter(_get_console_formatter(original_handlers))
try:
logger.handlers = [
handler
for handler in logger.handlers
if not _is_console_logging_handler(handler)
] + [tqdm_handler]
yield
finally:
logger.handlers = original_handlers
@contextmanager
def tqdm_with_logging_redirect(*args, logger: logging.Logger = None, **kwargs):
with tqdm(*args, **kwargs) as pbar:
with redirect_logging_to_tqdm(logger=logger):
yield pbar
```
And it could be used like this:
```python
import logging
from <source package> import tqdm_with_logging_redirect
LOGGER = logging.getLogger(__name__)
if __name__ == '__main__':
logging.basicConfig(level='INFO')
file_list = ['file1', 'file2']
with tqdm_with_logging_redirect(total=len(file_list)) as pbar:
# logging to the console is now redirected to tqdm
for filename in file_list:
LOGGER.info('processing file: %s', filename)
pbar.update(1)
# logging is now restored
```
This is just an example.
(I could also provide tests for the above implementation if it's of any use)
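(For later readers: this proposal was eventually merged as `tqdm.contrib.logging`; minimal usage, assuming tqdm >= 4.60:)
```python
import logging
from tqdm import trange
from tqdm.contrib.logging import logging_redirect_tqdm

LOG = logging.getLogger(__name__)

if __name__ == "__main__":
    logging.basicConfig(level="INFO")
    with logging_redirect_tqdm():
        for i in trange(9):
            if i == 4:
                LOG.info("console logging redirected to tqdm.write()")
```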
|
closed
|
2019-08-02T14:39:49Z
|
2021-04-06T00:21:40Z
|
https://github.com/tqdm/tqdm/issues/786
|
[
"p3-enhancement 🔥",
"submodule ⊂",
"to-merge ↰",
"c1-quick 🕐"
] |
de-code
| 4
|
matterport/Mask_RCNN
|
tensorflow
| 2,677
|
OSError: Unable to open file
|
Hi, I'm unable to run a training session. I'm using TensorFlow 2.3 and modified model.py, utils.py and other files accordingly, based on some available TF2-compatible versions. Any help would be highly appreciated.
```
OSError Traceback (most recent call last)
<ipython-input-15-ce3d0a98ce21> in <module>
10 model.load_weights('Users/20013819/Documents/boundary detection/Mask_RCNN/mask_rcnn_coco.h5', by_name=True,
11 exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
---> 12 "mrcnn_bbox", "mrcnn_mask"])
13 elif init_with == "last":
14 # Load the last model you trained and continue training
~\Documents\boundary detection\Mask_RCNN\mrcnn\model.py in load_weights(self, filepath, by_name, exclude)
2113 if h5py is None:
2114 raise ImportError('`load_weights` requires h5py.')
-> 2115 with h5py.File(filepath, mode='r') as f:
2116 if 'layer_names' not in f.attrs and 'model_weights' in f:
2117 f = f['model_weights']
c:\users\20013819\anaconda3\envs\tensorflow\lib\site-packages\h5py\_hl\files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, rdcc_nslots, rdcc_nbytes, rdcc_w0, track_order, **kwds)
406 fid = make_fid(name, mode, userblock_size,
407 fapl, fcpl=make_fcpl(track_order=track_order),
--> 408 swmr=swmr)
409
410 if isinstance(libver, tuple):
c:\users\20013819\anaconda3\envs\tensorflow\lib\site-packages\h5py\_hl\files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
171 if swmr and swmr_support:
172 flags |= h5f.ACC_SWMR_READ
--> 173 fid = h5f.open(name, flags, fapl=fapl)
174 elif mode == 'r+':
175 fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
h5py\h5f.pyx in h5py.h5f.open()
OSError: Unable to open file (unable to open file: name = 'Users/20013819/Documents/boundary detection/Mask_RCNN/mask_rcnn_coco.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
```
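In case it helps: the `errno = 2` points at a *relative* path (`Users/20013819/...`, no drive letter or leading slash). A sketch of the fix, reusing the `model` object from the snippet above; the absolute path below is an assumption based on the traceback:
```python
import os

COCO_WEIGHTS = r"C:\Users\20013819\Documents\boundary detection\Mask_RCNN\mask_rcnn_coco.h5"
assert os.path.exists(COCO_WEIGHTS), f"weights not found: {COCO_WEIGHTS}"
model.load_weights(COCO_WEIGHTS, by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])
```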
|
closed
|
2021-08-23T05:47:04Z
|
2021-08-23T11:29:54Z
|
https://github.com/matterport/Mask_RCNN/issues/2677
|
[] |
chhigansharma
| 2
|
ray-project/ray
|
data-science
| 51,387
|
[Train] Add support for NeMo Megatron strategy with lightning
|
### Description
Similar to the [deepspeed lightning strategy with ray](https://docs.ray.io/en/latest/train/api/doc/ray.train.lightning.RayDeepSpeedStrategy.html#ray.train.lightning.RayDeepSpeedStrategy) I would like to integrate with the NeMo framework to use ray to manage feeding data to and orchestrating MegatronStrategy training with NeMo.
### Use case
Support TP/PP/CP parallelism techniques using [MegatronStrategy in Nemo](https://docs.nvidia.com/nemo-framework/user-guide/24.09/nemo-2.0/features/megatron.html#megatronstrategy).
|
open
|
2025-03-14T21:52:34Z
|
2025-03-18T17:12:06Z
|
https://github.com/ray-project/ray/issues/51387
|
[
"enhancement",
"triage",
"train"
] |
sam-h-bean
| 0
|
onnx/onnx
|
tensorflow
| 6,302
|
a) Feature Request: Function sample_dirichlet, b) State of probabilistic model support?
|
I am very interested in converting DeepLearning models, that contain the PowerSpherical function (https://github.com/andife/power_spherical) to ONNX.
Currently this fails because of the Dirichlet function (https://github.com/pytorch/pytorch/issues/116336).
After my research, I came across https://github.com/onnx/onnxmltools/issues/549, among others
and wondered whether it would be useful to have gamma, Dirichlet, and beta sampling available in general.
For this reason, the question arises: what does the current state of probabilistic model support look like?
Dirichlet is available in
pytorch: https://pytorch.org/docs/stable/distributions.html#dirichlet
tensorflow: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/distributions/dirichlet_multinomial.py
Would it be a suitable direction, e.g. to create a sample Dirichlet method as an Onnx function based on RandomUniform (https://onnx.ai/onnx/operators/onnx__RandomUniform.html#l-onnx-doc-randomuniform)?
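To make the decomposition concrete, a NumPy sketch of the math such an ONNX function would have to mimic (Dirichlet as normalized independent Gamma draws, with Gamma itself reducible to transforms of uniform draws):
```python
import numpy as np

def sample_dirichlet(alpha, rng=None):
    # Dirichlet(alpha) = Gamma(alpha_i, 1) draws normalized to sum to 1.
    rng = rng or np.random.default_rng()
    g = rng.gamma(shape=np.asarray(alpha, dtype=float), scale=1.0)
    return g / g.sum()

print(sample_dirichlet([0.5, 1.0, 2.0]))  # three weights summing to 1
```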
|
open
|
2024-08-17T04:38:43Z
|
2024-09-30T21:38:34Z
|
https://github.com/onnx/onnx/issues/6302
|
[] |
andife
| 6
|
yezz123/authx
|
pydantic
| 626
|
🐛 Investigate TypeError: `coroutine` object is not callable
|
https://github.com/yezz123/authx/actions/runs/9650681836/job/26616841558?pr=625
|
closed
|
2024-06-24T18:47:04Z
|
2024-06-30T15:05:39Z
|
https://github.com/yezz123/authx/issues/626
|
[
"bug"
] |
yezz123
| 0
|
python-gitlab/python-gitlab
|
api
| 2,485
|
Documentation wrong on page for "badges"
|
## Description of the problem, including code/CLI snippet
Documentation seems to be wrong here:
https://python-gitlab.readthedocs.io/en/stable/gl_objects/badges.html#examples
-> "Update a badge"
## Expected Behavior
Update a badge:
badge.image_url = new_image_url
badge.link_url = new_link_url
badge.save()
## Actual Behavior
Update a badge:
badge.image_link = new_link
badge.save()
## Specifications
- python-gitlab version: documentation issue only
- API version you are using: documentation issue only
- Gitlab server version: documentation issue only
|
closed
|
2023-02-08T19:00:37Z
|
2024-02-12T01:12:14Z
|
https://github.com/python-gitlab/python-gitlab/issues/2485
|
[
"docs",
"good first issue"
] |
stuff2use
| 3
|
custom-components/pyscript
|
jupyter
| 46
|
Feature Request: access to sunset/sunrise in function
|
This works, but it's complicated to write and follow:
```python
@state_trigger('binary_sensor.dark == "on"')
@time_active("range(sunrise - 120min, sunset + 120min)")
def turn_on_dark():
turn_on()
@state_trigger('binary_sensor.downstairs_occupied == "on" and binary_sensor.dark == "on"')
def turn_on_occupied():
turn_on()
@state_trigger('binary_sensor.downstairs_occupied == "off"')
@time_active("range(sunset + 120min, sunrise - 120min)")
def turn_off_notoccupied():
turn_off()
@state_trigger('binary_sensor.dark == "off"')
def turn_off_notdark():
turn_off()
@time_trigger('startup')
@state_active('binary_sensor.dark == "on"')
@time_active("range(sunrise - 120min, sunset + 120min)")
def turn_on_startup_dark():
turn_on()
@time_trigger('startup')
@state_active('binary_sensor.downstairs_occupied == "on" and binary_sensor.dark == "on"')
def turn_on_startup_occupied():
turn_on()
@time_trigger('startup')
@state_active('binary_sensor.downstairs_occupied == "off"')
@time_active("range(sunset + 120min, sunrise - 120min)")
def turn_off_startup_notoccupied():
turn_off()
@time_trigger('startup')
@state_active('binary_sensor.dark == "off"')
def turn_off_startup_notdark():
turn_off()
```
I would prefer this:
```python
import datetime
@time_trigger('startup')
@state_trigger('True or binary_sensor.dark or binary_sensor.downstairs_occupied')
def set_lights():
today = datetime.datetime.today()
start_time = today.replace(hour=5, minute=0) # would like sunrise - 120min
end_time = today.replace(hour=22, minute=0) # would like sunset + 120min
if binary_sensor.dark == "off":
turn_off()
return
if start_time < datetime.datetime.now() < end_time:
turn_on()
return
if binary_sensor.downstairs_occupied == "on":
turn_on()
return
turn_off()
```
However, as you can see in the code comments, there's not an easy way to get to sunrise/sunset offsets.
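For completeness, the kind of access I'm after (a sketch, assuming Home Assistant's sun integration is loaded and that pyscript exposes its state attributes as ISO-8601 strings):
```python
import datetime

# Assumption: sun.sun.next_rising / next_setting are readable attributes.
sunrise = datetime.datetime.fromisoformat(sun.sun.next_rising.replace("Z", "+00:00"))
sunset = datetime.datetime.fromisoformat(sun.sun.next_setting.replace("Z", "+00:00"))
start_time = sunrise - datetime.timedelta(minutes=120)
end_time = sunset + datetime.timedelta(minutes=120)
```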
|
closed
|
2020-10-16T11:34:32Z
|
2023-04-10T07:40:06Z
|
https://github.com/custom-components/pyscript/issues/46
|
[] |
dlashua
| 8
|
sigmavirus24/github3.py
|
rest-api
| 723
|
multiple test failures on develop branch
|
I'm at a bit of a loss as to why the unit tests are working under Travis, as I'm unable to get a clean run locally. E.g.:
```
[develop] ~/github/github3.py $ virtualenv venv
New python executable in /home/jhoblitt/github/github3.py/venv/bin/python2
Also creating executable in /home/jhoblitt/github/github3.py/venv/bin/python
Installing setuptools, pip, wheel...done.
[develop] ~/github/github3.py $ . venv/bin/activate
[develop] ~/github/github3.py $ export TRAVIS_GH3="True"
[develop] ~/github/github3.py $ export TOXENV=py27
[develop] ~/github/github3.py $ export REQUESTS_VERSION="===2.0.1"
[develop] ~/github/github3.py $ python --version
Python 2.7.13
[develop] ~/github/github3.py $ pip install tox
Collecting tox
Using cached tox-2.7.0-py2.py3-none-any.whl
Collecting virtualenv>=1.11.2; python_version != "3.2" (from tox)
Using cached virtualenv-15.1.0-py2.py3-none-any.whl
Collecting pluggy<1.0,>=0.3.0 (from tox)
Using cached pluggy-0.4.0-py2.py3-none-any.whl
Collecting py>=1.4.17 (from tox)
Using cached py-1.4.34-py2.py3-none-any.whl
Installing collected packages: virtualenv, pluggy, py, tox
Successfully installed pluggy-0.4.0 py-1.4.34 tox-2.7.0 virtualenv-15.1.0
[develop] ~/github/github3.py $ tox
GLOB sdist-make: /home/jhoblitt/github/github3.py/setup.py
py27 inst-nodeps: /home/jhoblitt/github/github3.py/.tox/dist/github3.py-1.0.0a4.zip
py27 installed: betamax==0.8.0,betamax-matchers==0.4.0,github3.py==1.0.0a4,linecache2==1.0.0,mock==1.0.1,py==1.4.34,pytest==3.1.3,requests==2.0.1,requests-toolbelt==0.8.0,six==1.10.0,traceback2==1.4.0,unittest2==1.1.0,uritemplate==3.0.0
py27 runtests: PYTHONHASHSEED='1147969665'
py27 runtests: commands[0] | py.test
.................................................................................................................X................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................x........................................................................................................................................................................................................................................................F...................s.sF....E..FF.F...........EEEEEEEEEEEF....
================================================================================== ERRORS ===================================================================================
______________________________________________________________________ ERROR at setup of test_findable ______________________________________________________________________
def setup_module():
> build_wheel()
venv/lib/python2.7/site-packages/wheel/test/test_basic.py:33:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
venv/lib/python2.7/site-packages/wheel/test/test_basic.py:44: in build_wheel
exec(compile(open('setup.py').read(), 'setup.py', 'exec'))
setup.py:26: in <module>
kwargs['tests_require'] = ['betamax >=0.2.0', 'pytest',
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
attrs = {'author': 'Illustrious Author', 'author_email': 'illustrious@example.org', 'description': 'Another testing distribution ☃', 'entry_points': {'console_scripts': ['complex-dist=complexdist:main', 'complex-dist2=complexdist:main']}, ...}
klass = <class setuptools.dist.Distribution at 0x7f8c38ce0e20>, dist = <setuptools.dist.Distribution instance at 0x7f8c361e9050>, ok = 1
exc = IOError(2, 'No such file or directory')
def setup(**attrs):
"""The gateway to the Distutils: do everything your setup script needs
to do, in a highly flexible and user-driven way. Briefly: create a
Distribution instance; find and parse config files; parse the command
line; run each Distutils command found there, customized by the options
supplied to 'setup()' (as keyword arguments), in config files, and on
the command line.
The Distribution instance might be an instance of a class supplied via
the 'distclass' keyword argument to 'setup'; if no such class is
supplied, then the Distribution class (in dist.py) is instantiated.
All other arguments to 'setup' (except for 'cmdclass') are used to set
attributes of the Distribution instance.
The 'cmdclass' argument, if supplied, is a dictionary mapping command
names to command classes. Each command encountered on the command line
will be turned into a command class, which is in turn instantiated; any
class found in 'cmdclass' is used in place of the default, which is
(for command 'foo_bar') class 'foo_bar' in module
'distutils.command.foo_bar'. The command class must provide a
'user_options' attribute which is a list of option specifiers for
'distutils.fancy_getopt'. Any command-line options between the current
and the next command are used to set attributes of the current command
object.
When the entire command-line has been successfully parsed, calls the
'run()' method on each command object in turn. This method will be
driven entirely by the Distribution object (which each command object
has a reference to, thanks to its constructor), and the
command-specific options that became attributes of each command
object.
"""
global _setup_stop_after, _setup_distribution
# Determine the distribution class -- either caller-supplied or
# our Distribution (see below).
klass = attrs.get('distclass')
if klass:
del attrs['distclass']
else:
klass = Distribution
if 'script_name' not in attrs:
attrs['script_name'] = os.path.basename(sys.argv[0])
if 'script_args' not in attrs:
attrs['script_args'] = sys.argv[1:]
# Create the Distribution instance, using the remaining arguments
# (ie. everything except distclass) to initialize it
try:
_setup_distribution = dist = klass(attrs)
except DistutilsSetupError, msg:
if 'name' in attrs:
raise SystemExit, "error in %s setup command: %s" % \
(attrs['name'], msg)
else:
raise SystemExit, "error in setup command: %s" % msg
if _setup_stop_after == "init":
return dist
# Find and parse the config file(s): they will override options from
# the setup script, but be overridden by the command line.
dist.parse_config_files()
if DEBUG:
print "options (after parsing config files):"
dist.dump_option_dicts()
if _setup_stop_after == "config":
return dist
# Parse the command line and override config files; any
# command-line errors are the end user's fault, so turn them into
# SystemExit to suppress tracebacks.
try:
ok = dist.parse_command_line()
except DistutilsArgError, msg:
raise SystemExit, gen_usage(dist.script_name) + "\nerror: %s" % msg
if DEBUG:
print "options (after parsing command line):"
dist.dump_option_dicts()
if _setup_stop_after == "commandline":
return dist
# And finally, run all the commands found on the command line.
if ok:
try:
dist.run_commands()
except KeyboardInterrupt:
raise SystemExit, "interrupted"
except (IOError, os.error), exc:
if DEBUG:
sys.stderr.write("error: %s\n" % (exc,))
raise
else:
> raise SystemExit, "error: %s" % (exc,)
E SystemExit: error: [Errno 2] No such file or directory: 'build/lib/complexdist/__init__.py'
/usr/lib64/python2.7/distutils/core.py:159: SystemExit
--------------------------------------------------------------------------- Captured stdout setup ---------------------------------------------------------------------------
running bdist_wheel
running build
running build_py
copying complexdist/__init__.py -> build/lib/complexdist
____________________________________________________________________ ERROR at setup of test_default_tag _____________________________________________________________________
file /home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py, line 47
def test_default_tag(temp_pkg):
E fixture 'temp_pkg' not found
> available fixtures: betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py:47
____________________________________________________________________ ERROR at setup of test_explicit_tag ____________________________________________________________________
file /home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py, line 57
def test_explicit_tag(temp_pkg):
E fixture 'temp_pkg' not found
> available fixtures: betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py:57
___________________________________________________________________ ERROR at setup of test_universal_tag ____________________________________________________________________
file /home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py, line 68
def test_universal_tag(temp_pkg):
E fixture 'temp_pkg' not found
> available fixtures: betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py:68
____________________________________________________________ ERROR at setup of test_universal_beats_explicit_tag ____________________________________________________________
file /home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py, line 79
def test_universal_beats_explicit_tag(temp_pkg):
E fixture 'temp_pkg' not found
> available fixtures: betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py:79
_______________________________________________________________ ERROR at setup of test_universal_in_setup_cfg _______________________________________________________________
file /home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py, line 90
def test_universal_in_setup_cfg(temp_pkg):
E fixture 'temp_pkg' not found
> available fixtures: betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py:90
_______________________________________________________________ ERROR at setup of test_pythontag_in_setup_cfg _______________________________________________________________
file /home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py, line 102
def test_pythontag_in_setup_cfg(temp_pkg):
E fixture 'temp_pkg' not found
> available fixtures: betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py:102
_________________________________________________________ ERROR at setup of test_legacy_wheel_section_in_setup_cfg __________________________________________________________
file /home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py, line 114
def test_legacy_wheel_section_in_setup_cfg(temp_pkg):
E fixture 'temp_pkg' not found
> available fixtures: betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py:114
__________________________________________________________________ ERROR at setup of test_plat_name_purepy __________________________________________________________________
file /home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py, line 126
def test_plat_name_purepy(temp_pkg):
E fixture 'temp_pkg' not found
> available fixtures: betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py:126
___________________________________________________________________ ERROR at setup of test_plat_name_ext ____________________________________________________________________
file /home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py, line 137
def test_plat_name_ext(temp_ext_pkg):
E fixture 'temp_ext_pkg' not found
> available fixtures: betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py:137
____________________________________________________________ ERROR at setup of test_plat_name_purepy_in_setupcfg ____________________________________________________________
file /home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py, line 151
def test_plat_name_purepy_in_setupcfg(temp_pkg):
E fixture 'temp_pkg' not found
> available fixtures: betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py:151
_____________________________________________________________ ERROR at setup of test_plat_name_ext_in_setupcfg ______________________________________________________________
file /home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py, line 163
def test_plat_name_ext_in_setupcfg(temp_ext_pkg):
E fixture 'temp_ext_pkg' not found
> available fixtures: betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/test_tagopt.py:163
================================================================================= FAILURES ==================================================================================
________________________________________________________________________________ test_pydist ________________________________________________________________________________
def test_pydist():
"""Make sure pydist.json exists and validates against our schema."""
# XXX this test may need manual cleanup of older wheels
> import jsonschema
E ImportError: No module named jsonschema
venv/lib/python2.7/site-packages/wheel/test/test_basic.py:117: ImportError
________________________________________________________________________________ test_keygen ________________________________________________________________________________
def test_keygen():
def get_keyring():
WheelKeys, keyring = tool.get_keyring()
class WheelKeysTest(WheelKeys):
def save(self):
pass
class keyringTest:
@classmethod
def get_keyring(cls):
class keyringTest2:
pw = None
def set_password(self, a, b, c):
self.pw = c
def get_password(self, a, b):
return self.pw
return keyringTest2()
return WheelKeysTest, keyringTest
> tool.keygen(get_keyring=get_keyring)
venv/lib/python2.7/site-packages/wheel/test/test_tool.py:25:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
venv/lib/python2.7/site-packages/wheel/tool/__init__.py:39: in keygen
WheelKeys, keyring = get_keyring()
venv/lib/python2.7/site-packages/wheel/test/test_tool.py:5: in get_keyring
WheelKeys, keyring = tool.get_keyring()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def get_keyring():
try:
from ..signatures import keys
import keyring
assert keyring.get_keyring().priority
except (ImportError, AssertionError):
> raise WheelError("Install wheel[signatures] (requires keyring, keyrings.alt, pyxdg) for signatures.")
E WheelError: Install wheel[signatures] (requires keyring, keyrings.alt, pyxdg) for signatures.
venv/lib/python2.7/site-packages/wheel/tool/__init__.py:34: WheelError
_____________________________________________________________________________ test_convert_egg ______________________________________________________________________________
def test_convert_egg():
base = pkg_resources.resource_filename('wheel.test', '')
for dist in test_distributions:
distdir = os.path.join(base, dist, 'dist')
> eggs = [e for e in os.listdir(distdir) if e.endswith('.egg')]
E OSError: [Errno 2] No such file or directory: '/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/complex-dist/dist'
venv/lib/python2.7/site-packages/wheel/test/test_basic.py:88: OSError
________________________________________________________________________________ test_unpack ________________________________________________________________________________
def test_unpack():
"""
Make sure 'wheel unpack' works.
This also verifies the integrity of our testing wheel files.
"""
for dist in test_distributions:
distdir = pkg_resources.resource_filename('wheel.test',
os.path.join(dist, 'dist'))
> for wheelfile in (w for w in os.listdir(distdir) if w.endswith('.whl')):
E OSError: [Errno 2] No such file or directory: '/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/test/complex-dist/dist'
venv/lib/python2.7/site-packages/wheel/test/test_basic.py:99: OSError
________________________________________________________________________________ test_pydist ________________________________________________________________________________
def test_pydist():
"""Make sure pydist.json exists and validates against our schema."""
# XXX this test may need manual cleanup of older wheels
> import jsonschema
E ImportError: No module named jsonschema
venv/lib/python2.7/site-packages/wheel/test/test_basic.py:117: ImportError
________________________________________________________________________________ test_keygen ________________________________________________________________________________
def test_keygen():
def get_keyring():
WheelKeys, keyring = tool.get_keyring()
class WheelKeysTest(WheelKeys):
def save(self):
pass
class keyringTest:
@classmethod
def get_keyring(cls):
class keyringTest2:
pw = None
def set_password(self, a, b, c):
self.pw = c
def get_password(self, a, b):
return self.pw
return keyringTest2()
return WheelKeysTest, keyringTest
> tool.keygen(get_keyring=get_keyring)
venv/lib/python2.7/site-packages/wheel/test/test_tool.py:25:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
venv/lib/python2.7/site-packages/wheel/tool/__init__.py:39: in keygen
WheelKeys, keyring = get_keyring()
venv/lib/python2.7/site-packages/wheel/test/test_tool.py:5: in get_keyring
WheelKeys, keyring = tool.get_keyring()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def get_keyring():
try:
from ..signatures import keys
import keyring
assert keyring.get_keyring().priority
except (ImportError, AssertionError):
> raise WheelError("Install wheel[signatures] (requires keyring, keyrings.alt, pyxdg) for signatures.")
E WheelError: Install wheel[signatures] (requires keyring, keyrings.alt, pyxdg) for signatures.
venv/lib/python2.7/site-packages/wheel/tool/__init__.py:34: WheelError
============================================================================= warnings summary ==============================================================================
venv/lib/python2.7/site-packages/wheel/test/test_signatures.py::test_ed25519py
/home/jhoblitt/github/github3.py/venv/lib/python2.7/site-packages/wheel/signatures/ed25519py.py:24: RuntimeWarning: ed25519ll should choose random seed.
RuntimeWarning)
-- Docs: http://doc.pytest.org/en/latest/warnings.html
6 failed, 1059 passed, 2 skipped, 1 xfailed, 1 xpassed, 1 warnings, 12 error in 10.34 seconds
ERROR: InvocationError: '/home/jhoblitt/github/github3.py/.tox/py27/bin/py.test'
__________________________________________________________________________________ summary __________________________________________________________________________________
ERROR: py27: commands failed
```
|
closed
|
2017-08-01T16:17:07Z
|
2017-08-01T16:22:16Z
|
https://github.com/sigmavirus24/github3.py/issues/723
|
[] |
jhoblitt
| 1
|
vaexio/vaex
|
data-science
| 2,383
|
[BUG-REPORT] AssertionError while performing math operation on shifted columns
|
**Description**
Let's say I want to compute something involving a column and its shifted values (lags or leads); a basic case is `df.A - 2 * df.A_shifted`. This is easy in pandas: `df.A - df.A.shift(1)`. However, Vaex raises a bare `AssertionError`. Below is the code I used:
```
import pandas as pd
import vaex
df = pd.DataFrame(data={'A': [1, 2, 3], 'B': [4, 5, 6]})
dfv = vaex.from_pandas(df)
```
Pandas:
```
print(df.A - 2 * df.A.shift(1))
output:
0 NaN
1 0.0
2 -1.0
Name: A, dtype: float64
```
Vaex:
```
print((dfv.A - 2 * dfv.shift(1, 'A', fill_value=0).A).to_pandas_series())
output:
AssertionError:
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[34], line 1
----> 1 (dfv.A - 2 * dfv.shift(1, 'A', fill_value=0).A).to_pandas_series()
File ~\miniconda3\envs\py38\lib\site-packages\vaex\expression.py:139, in Meta.__new__.<locals>.wrap.<locals>.f(a, b)
137 else:
138 if isinstance(b, Expression):
--> 139 assert b.ds == a.ds
140 b = b.expression
141 elif isinstance(b, (np.timedelta64)):
AssertionError:
```
Initially, I thought Vaex was failing to operate on `nan` values, so I used `fill_value=0` to make sure nothing fancy was required. Something else must be wrong, because a calculation using both the `A` and `B` columns works fine:
```
print((dfv.A - dfv.B).to_pandas_series())
output:
0 -3
1 -3
2 -3
dtype: int64
```
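A workaround sketch that avoids mixing expressions from two different DataFrames (the `A_shifted` name is just for illustration; I have not verified this is the intended pattern): materialize the shifted values and attach them to the original frame.
```python
shifted_values = dfv.shift(1, 'A', fill_value=0).A.values  # materialize the shifted column
dfv['A_shifted'] = shifted_values                          # attach it to the original frame
print((dfv.A - 2 * dfv.A_shifted).to_pandas_series())      # both expressions now share one frame
```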
**Software information**
- Vaex version (`import vaex; vaex.__version__)`:
```
{'vaex': '4.16.0',
'vaex-core': '4.16.1',
'vaex-viz': '0.5.4',
'vaex-hdf5': '0.14.1',
'vaex-server': '0.8.1',
'vaex-astro': '0.9.3',
'vaex-jupyter': '0.8.1',
'vaex-ml': '0.18.1'}
```
- Vaex was installed via: pip / conda-forge / from source : `via pip`
- OS: Windows 11 (Python 3.8)
|
open
|
2023-07-11T11:08:42Z
|
2023-07-11T11:08:42Z
|
https://github.com/vaexio/vaex/issues/2383
|
[] |
msat59
| 0
|
ageitgey/face_recognition
|
machine-learning
| 781
|
ValueError: operands could not be broadcast together with shapes (2,0) (128,)
|
open
|
2019-03-23T17:30:24Z
|
2019-06-20T10:17:52Z
|
https://github.com/ageitgey/face_recognition/issues/781
|
[] |
Profiiii
| 1
|
|
strawberry-graphql/strawberry-django
|
graphql
| 658
|
Implementing Object Level Permissions
|
<!--- Provide a general summary of the changes you want in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Feature Request Type
- [ ] Core functionality
- [ ] Alteration (enhancement/optimization) of existing feature(s)
- [x] New behavior
## Description
Is there a way to implement Object Level Permissions? Something similar to what we have on Django Rest Framework. This would enable us to determine if a user has access to a particular object or model instance
This would be a great help in a multi-tenancy system where some users would have the same permissions but would only be able to access certain model objects
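For illustration, a rough sketch of what I imagine this could look like, building on strawberry's `BasePermission` (the context access and ownership attribute are assumptions, not an existing strawberry-django API):
```python
from typing import Any

from strawberry.permission import BasePermission
from strawberry.types import Info

class IsObjectOwner(BasePermission):
    message = "You do not have access to this object."

    def has_permission(self, source: Any, info: Info, **kwargs) -> bool:
        # `source` is the resolved object; compare its owner to the
        # requesting user (attribute names are assumptions for illustration)
        user = info.context.request.user
        return getattr(source, "owner_id", None) == getattr(user, "id", None)
```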
|
open
|
2024-11-15T03:51:50Z
|
2025-03-20T15:57:40Z
|
https://github.com/strawberry-graphql/strawberry-django/issues/658
|
[] |
moluwole
| 2
|
serengil/deepface
|
machine-learning
| 1,015
|
Exclude Jupyter Notebooks from GitHub Programming Language Stats
|
GitHub's programming language statistics include Jupyter Notebooks, which may skew the results for repositories that heavily use them. This issue is created to address the need for excluding Jupyter Notebooks from the language statistics calculation.
### Proposed Solution
To exclude Jupyter Notebooks from language stats, we can utilize GitHub's Linguist tool by adding a `.gitattributes` file to the repository root with the following content:
```gitattributes
# .gitattributes
*.ipynb linguist-vendored
```
|
closed
|
2024-02-09T10:50:30Z
|
2024-02-10T18:27:13Z
|
https://github.com/serengil/deepface/issues/1015
|
[
"enhancement"
] |
serengil
| 1
|
streamlit/streamlit
|
python
| 10,879
|
Skills Vs Pay, Titles/Radio Buttons Wont Stay Stuck and Always Reset...
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Hey,
Apologies if this is confusing or obvious at all.
I'm using the datanerd.tech site to go through and verify some data, trends, and whatnot.
Unfortunately, I've noticed that the 'Skills Vs Pay' page stays stuck on Job Title and Country set to Select All, Data Skills set to Top 10, and Timeframe set to All Time. If you select anything else and try to load with the chosen filters, it buffers and then simply resets back to the defaults you get when loading the page, so you basically can't use the page at all.
Not 100% sure what's going on either.
### Reproducible Code Example
```Python
```
### Steps To Reproduce
1. Go to Skills versus Pay
2. Click any of the buttons to filter specifically
3. It simply buffers/loads, then completely resets without showing any data for what you selected.
### Expected Behavior
Show the data/numbers and not reset...
### Current Behavior
It simply reloads and completely resets and never loads the information properly.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
Windows 11, 64 bit. Chrome.
### Additional Information
<img width="881" alt="Image" src="https://github.com/user-attachments/assets/fe5af1ca-4af9-4db0-aff7-b869be6e76f5" />
|
open
|
2025-03-23T15:41:25Z
|
2025-03-23T15:41:35Z
|
https://github.com/streamlit/streamlit/issues/10879
|
[
"type:bug",
"status:needs-triage"
] |
ipsaphoenix
| 1
|
deepset-ai/haystack
|
pytorch
| 8,118
|
docs: clean up docstrings of OpenAIChatGenerator
|
closed
|
2024-07-30T07:36:22Z
|
2024-07-31T07:45:16Z
|
https://github.com/deepset-ai/haystack/issues/8118
|
[] |
agnieszka-m
| 0
|
|
praw-dev/praw
|
api
| 1,447
|
Revert commit 698b103514932424a222edfadd6ea735db76e954
|
Per #1444, those ``"..."`` should not have been removed.
|
closed
|
2020-04-27T11:57:34Z
|
2020-04-27T21:25:49Z
|
https://github.com/praw-dev/praw/issues/1447
|
[
"Bug",
"Documentation"
] |
PythonCoderAS
| 0
|
flasgger/flasgger
|
rest-api
| 122
|
schema validation abort directly
|
Schema validation aborts directly:
```python
try:
    jsonschema.validate(data, main_def)
except ValidationError as e:
    abort(Response(str(e), status=400))
```
There should be an option to only raise the exception and let the caller handle it.
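For example, a minimal sketch of the requested option (the `raise_on_error` flag is hypothetical, not part of flasgger's actual API):
```python
import jsonschema
from flask import Response, abort
from jsonschema import ValidationError

def validate_payload(data, main_def, raise_on_error=False):
    try:
        jsonschema.validate(data, main_def)
    except ValidationError as e:
        if raise_on_error:
            raise  # let the caller decide how to handle the failure
        abort(Response(str(e), status=400))
```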
|
closed
|
2017-06-25T11:33:33Z
|
2017-10-28T20:25:13Z
|
https://github.com/flasgger/flasgger/issues/122
|
[
"enhancement"
] |
ghost
| 5
|
jofpin/trape
|
flask
| 284
|
ERROR: cannot import name json
|
I get the error `cannot import name json` after running `python2 trape.py -h`.
|
open
|
2020-12-25T13:52:42Z
|
2021-01-31T12:08:28Z
|
https://github.com/jofpin/trape/issues/284
|
[] |
Gestohlener
| 1
|
blacklanternsecurity/bbot
|
automation
| 2,137
|
Include IP, routing table in initial SCAN event
|
In order to correlate scan activity back to a specific agent / IP, we should include network interface information in the initial scan event.
@aconite33 @kerrymilan
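For example, a rough sketch of collecting interface information to attach to the event (the shape of the data is just an assumption; psutil is one option for gathering it):
```python
import psutil

def interface_summary():
    # map each interface name to its IPv4/IPv6 addresses
    summary = {}
    for name, addrs in psutil.net_if_addrs().items():
        summary[name] = [a.address for a in addrs
                         if a.family.name in ("AF_INET", "AF_INET6")]
    return summary
```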
|
open
|
2025-01-07T16:03:23Z
|
2025-02-06T00:02:28Z
|
https://github.com/blacklanternsecurity/bbot/issues/2137
|
[
"enhancement"
] |
TheTechromancer
| 0
|
betodealmeida/shillelagh
|
sqlalchemy
| 214
|
Potential Issue with exact on Fields
|
The docs indicate that `exact=True` means no post filtering is needed:
https://github.com/betodealmeida/shillelagh/blob/97197bd564e96a23c5587be5c9e315f7c0e693ea/src/shillelagh/fields.py#L121-L125
however that value seems to be passed to apsw:
https://github.com/betodealmeida/shillelagh/blob/14579e4b8c3159adc4076b36638d13f00dc70609/src/shillelagh/backends/apsw/vt.py#L305
and the docs for that indicate that `False` means SQLAlchemy won't do any double checking:
https://github.com/betodealmeida/shillelagh/blob/14579e4b8c3159adc4076b36638d13f00dc70609/src/shillelagh/typing.py#L16-L17
(also consistent with [the apsw docs page](https://rogerbinns.github.io/apsw/vtable.html#apsw.VTTable.BestIndex))
Am I misunderstanding something here?
|
closed
|
2022-03-28T17:11:49Z
|
2022-03-30T15:37:28Z
|
https://github.com/betodealmeida/shillelagh/issues/214
|
[
"bug"
] |
cancan101
| 3
|
Miserlou/Zappa
|
django
| 1,914
|
Passing non JSON serializable object to task throws cryptic error
|
Passing a non-JSON-serializable object to a task throws a cryptic error on Lambda, but no error when running locally.
## Context
I've recently noticed this when passing a Django model instance as a param to a task. The task ran fine when I was running the code locally but it failed when I deployed to lambda.
## Expected Behavior
I expected the task to run.
## Actual Behavior
My code threw: `Object of type '<type>' is not JSON serializable`.
## Possible Fix
It would be nice if this was caught straight away when running locally. Maybe we can add an assertion in the decorator?
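For illustration, a sketch of the kind of early check I have in mind (not Zappa's actual code; the error message is made up):
```python
import json
from functools import wraps

def task(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        # fail fast, locally too, if the arguments can't round-trip as JSON
        try:
            json.dumps({"args": args, "kwargs": kwargs})
        except TypeError as e:
            raise TypeError(
                f"Arguments to task '{func.__name__}' must be JSON serializable: {e}"
            )
        return func(*args, **kwargs)
    return wrapper
```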
## Steps to Reproduce
1. Create a task using the `@task` decorator which has a non JSON serializable parameter
2. Deploy to lambda
3. Run some code that triggers the task
## Your Environment
```
python 3.6
lambda-packages 0.20.0
zappa 0.48.2
```
|
open
|
2019-08-06T10:26:09Z
|
2019-08-06T10:26:09Z
|
https://github.com/Miserlou/Zappa/issues/1914
|
[] |
stan-sack
| 0
|
miguelgrinberg/flasky
|
flask
| 327
|
Can Flask be the server and Node.js the web front end?
|
I know Node.js is a server framework, but I need it for the web front end. Can Flask work as the server with Node.js as the web front end? Thanks.
|
closed
|
2018-01-02T07:57:03Z
|
2018-10-14T22:22:50Z
|
https://github.com/miguelgrinberg/flasky/issues/327
|
[
"question"
] |
744216212
| 1
|
microsoft/nni
|
pytorch
| 5,773
|
Which framework to use for Neural Architecture Search: NNI or Archai?
|
Hello,
I have been exploring open-source frameworks for NAS and came across NNI and Archai. How are these two frameworks different? As both come from the Microsoft research group, a clarification would be really helpful. Additionally, I would appreciate guidance on which framework would be more suitable for me as a user/researcher.
Archai Github: https://github.com/microsoft/archai
Documentation: https://microsoft.github.io/archai/index.html#
Any feedback would be highly appreciated. Thank you!
|
open
|
2024-04-23T15:15:10Z
|
2024-04-23T15:15:10Z
|
https://github.com/microsoft/nni/issues/5773
|
[] |
mkumar73
| 0
|
PeterL1n/BackgroundMattingV2
|
computer-vision
| 79
|
question about the fourth output of basenet:hidden
|
What is it used for, and why was this output designed?
|
closed
|
2021-03-22T02:40:01Z
|
2021-03-22T02:44:27Z
|
https://github.com/PeterL1n/BackgroundMattingV2/issues/79
|
[] |
nostayup
| 1
|
deepset-ai/haystack
|
machine-learning
| 8,035
|
`OpenAPITool` fails to handle OpenAPI specs with `servers` under `paths` and missing `operationId`
|
### Bug Report
**Describe the bug**
When using the `OpenAPITool` with a specific OpenAPI YAML specification, the tool fails to process the request properly. The issue arises due to the `servers` field being under the `paths` section and the absence of an `operationId` on the forecast path, which the tool does not handle correctly.
**Error message**
N/A (No explicit error message was thrown; the tool simply fails to process the request as expected).
**Expected behavior**
The `OpenAPITool` should handle the OpenAPI YAML specification correctly, even when the `servers` field is under the `paths` section and there is no `operationId` on the forecast path. The tool should be able to generate and process the request successfully.
**Additional context**
- Document types: OpenAPI YAML specification
- Preprocessing steps: N/A
- Settings of reader: N/A
**To Reproduce**
Steps to reproduce the behavior:
1. Use the following code with the provided OpenAPI YAML specification URL:
```python
from haystack.dataclasses import ChatMessage
from haystack_experimental.components.tools.openapi import OpenAPITool, LLMProvider
from haystack.utils import Secret
# Using the problematic OpenAPI YAML specification
tool = OpenAPITool(generator_api=LLMProvider.OPENAI,
generator_api_params={"model":"gpt-3.5-turbo"},
spec="https://raw.githubusercontent.com/open-meteo/open-meteo/main/openapi.yml")
results = tool.run(messages=[ChatMessage.from_user("weather forecast for latitude 52.52 and longitude 13.41")])
print(results)
```
2. Observe that the tool fails to process the request properly.
3. Modify the code to use a different OpenAPI YAML specification that does not have the `servers` field under the `paths` section and includes an `operationId` on the forecast path:
```python
from haystack.dataclasses import ChatMessage
from haystack_experimental.components.tools.openapi import OpenAPITool, LLMProvider
from haystack.utils import Secret
tool = OpenAPITool(generator_api=LLMProvider.OPENAI,
generator_api_params={"model":"gpt-3.5-turbo"},
spec="https://gist.githubusercontent.com/vblagoje/a241b63755e19a20c8bc7e8ea0724aba/raw/e8fe5b66a16b7bf4fadb9551788dd71433af1d58/meteo-openapi.yaml")
results = tool.run(messages=[ChatMessage.from_user("weather forecast for latitude 52.52 and longitude 13.41 and set hourly=temperature_2m")])
print(results)
```
4. Observe that the tool processes the request successfully.
The issue seems to be with how the `OpenAPITool` handles certain structures in the OpenAPI YAML specification.
|
closed
|
2024-07-17T07:11:19Z
|
2024-07-18T15:57:31Z
|
https://github.com/deepset-ai/haystack/issues/8035
|
[] |
vblagoje
| 0
|
deepspeedai/DeepSpeed
|
deep-learning
| 5,945
|
nv-nightly CI test failure
|
The Nightly CI for https://github.com/microsoft/DeepSpeed/actions/runs/10499491789 failed.
|
closed
|
2024-08-15T01:41:43Z
|
2024-08-22T18:50:00Z
|
https://github.com/deepspeedai/DeepSpeed/issues/5945
|
[
"ci-failure"
] |
github-actions[bot]
| 1
|
pyro-ppl/numpyro
|
numpy
| 1,010
|
HMM forward algorithm - 20x difference in performance between two comparable approaches
|
This issue links to the following thread:
https://forum.pyro.ai/t/intuition-for-the-difference-between-the-two-hmm-tutorials-forward-algo-vs-marginalization/2775
The following two tutorials for HMMs cover two alternative solutions via the forward algorithm (one explicit, one implicit):
1. [HMM Example semi-supervised](http://num.pyro.ai/en/latest/examples/hmm.html)
2. [Enumating HMM](http://num.pyro.ai/en/latest/examples/hmm_enum.html)
**Context**:
- Using simulate_data function from [1] with the following parameters:
> `args = parser.parse_args("-n 2000 --num-words 10 --num-categories 3 --num-supervised 0 --num-unsupervised 2000".split(' '))`
In other words, 1 series of 2000 observed "words" with transmission matrix 3x3 and emission matrix 3x10 (unsupervised = categories are unknown)
- Estimate model_1 on `unsupervised_words` from [1] without any changes
- Adjust model_1 from [2] to solve the same task (output categorical dist + align shapes), vis. `model_1_alt` below
```python
def model_1_alt(unsupervised_words):
    num_categories = args.num_categories
    num_words = args.num_words
    emission_prior = jnp.repeat(0.1, num_words)
    transition_prior = jnp.ones(num_categories)
    probs_x = numpyro.sample(
        "probs_x", dist.Dirichlet(
            jnp.broadcast_to(transition_prior, (num_categories, num_categories))).to_event(1)
    )
    probs_y = numpyro.sample(
        "probs_y", dist.Dirichlet(
            jnp.broadcast_to(emission_prior, (num_categories, num_words))).to_event(1)
    )

    def transition_fn(carry, y):
        x_prev = carry
        x = numpyro.sample("x", dist.Categorical(probs_x[x_prev]))
        y = numpyro.sample("y", dist.Categorical(probs_y[x]), obs=y)
        return x, None

    # carry holds only the previous hidden state, matching transition_fn above
    x_init = 0
    scan(transition_fn, x_init, unsupervised_words)
```
**Expected behaviour**: Both approaches would achieve similar results within comparable time (eg, +-100%) due to slightly different computations
**Actual behaviour**: Approach with model_1 from [1] runs c. 20x faster than model_1_alt based on [2] (130s vs 2,700s, respectively; time measures the whole fitting procedure)
**Setup**:
- Seeds: Simulation done with random.PRNGKey(1); mcmc is done with random.PRNGKey(2)
- ran on CPU on AWS EC2 m5.xlarge
- Python (3.7.8), Numpyro installed from master (0.6.0), Jax (0.2.12)
|
closed
|
2021-04-14T21:15:35Z
|
2021-04-28T18:22:25Z
|
https://github.com/pyro-ppl/numpyro/issues/1010
|
[
"question"
] |
svilupp
| 2
|
tflearn/tflearn
|
data-science
| 214
|
Thread-specifc models in reinforcement Learning example
|
In the [reinforcement learning example](https://github.com/tflearn/tflearn/blob/master/examples/reinforcement_learning/atari_1step_qlearning.py), the note describes that this example implements the 1-step Q-learning algorithm from [this paper](http://arxiv.org/pdf/1602.01783v1.pdf). However, I found that the model does not maintain thread-specific parameters for each of the parallel threads. Does it mean that the model is a simplification of the original asynchronous 1-step Q-learning model?
|
closed
|
2016-07-21T00:19:27Z
|
2016-07-22T02:17:11Z
|
https://github.com/tflearn/tflearn/issues/214
|
[] |
yukezhu
| 2
|
autogluon/autogluon
|
data-science
| 4,573
|
[BUG] Why can't we just specify the 'gloo' distributed backend in hyperparameters?
|
```python
['ddp', 'ddp_spawn', 'ddp_fork', 'ddp_notebook', 'ddp_find_unused_parameters_false', 'ddp_find_unused_parameters_true', 'ddp_spawn_find_unused_parameters_false', 'ddp_spawn_find_unused_parameters_true', 'ddp_fork_find_unused_parameters_false', 'ddp_fork_find_unused_parameters_true', 'ddp_notebook_find_unused_parameters_false', 'ddp_notebook_find_unused_parameters_true', 'deepspeed', 'deepspeed_stage_1', 'deepspeed_stage_2', 'deepspeed_stage_2_offload', 'deepspeed_stage_3', 'deepspeed_stage_3_offload', 'deepspeed_stage_3_offload_nvme', 'fsdp', 'fsdp_cpu_offload', 'single_device', 'single_xla', 'xla_debug', 'xla', 'single_tpu']
```
Why can I only configure the distributed strategies listed above, with no way to select the 'gloo' backend? This makes Windows users very sad; with highly encapsulated classes like MultiModalPredictor, modifying the internals is out of the question!
|
open
|
2024-10-23T14:58:55Z
|
2024-10-23T14:58:55Z
|
https://github.com/autogluon/autogluon/issues/4573
|
[
"bug: unconfirmed",
"Needs Triage"
] |
Ultraman6
| 0
|
kymatio/kymatio
|
numpy
| 454
|
On output of 3D scattering
|
According to the following comment in #158, for a 3D input of size (B, M, N, O), the outputs, depending on the method, would have the following dimensions:
> The 3D version returns different outputs for different versions.
>
> - "standard" would return `(B, P, M/2**J, N/2**J, O/2**J)`
> - "integral" would return `(B, P, 1, 1, 1)` (or `(B, P)` not sure)
> - "local" would return `(B, P, 1)`
>
> The reason for "integral" is practically dimensionality reduction (among others) because 3D is typically large. For "local" you have to ask @louity to be certain but I think it's to identify interesting regions in the moduli (e.g. midpoints between bonded atoms in quantum chemistry)
>
> _Originally posted by @gexarcha in https://github.com/kymatio/kymatio/issues/158#issuecomment-440062949_
Now I have 2 issues:
1. for an input data with size `torch.Size([1, 128, 128, 128])`, using standard method, I get an output of size `torch.Size([1, 10, 5, 16, 16, 16])` for a scattering with `J = 3` and `L = 4`. This does not follow `(B, P, M/2**J, N/2**J, O/2**J)` that was mentioned earlier, but looks like `(B, ?, ?, M/2**J, N/2**J, O/2**J)`. I was wondering how to compute values for the dimensions marked by `?`.
2. Again, as mentioned in another comment in https://github.com/kymatio/kymatio/issues/158#issuecomment-441827979 and implemented in `scattering3d_qm7`, zeroth order coefficients are computed using `compute_integrals`. However, the example implemented in `compute_speed.py` does not use `compute_integrals`. I was wondering which one is correct, to use or not to use `compute_integrals`?
Thanks!
|
closed
|
2019-11-21T01:57:35Z
|
2022-06-20T20:17:50Z
|
https://github.com/kymatio/kymatio/issues/454
|
[] |
nshervt
| 9
|
cupy/cupy
|
numpy
| 8,973
|
Use of cp.pad causes significant performance degradation in downstream code compared to np.pad
|
### Description
When padding arrays directly on the GPU using `cp.pad`, I observed significantly slower performance in _subsequent_ code compared to padding on the CPU using `np.pad` and then transferring the data to the GPU with `cp.asarray`.
CuPy is awesome! Any help would be greatly appreciated! 😁
**What Happened:**
Padding an image using `cp.pad` resulted in slower execution times for subsequent GPU operations. The performance degradation was consistently observed across several functions (`correlation_fft`, `correlation_mul`, and `memory_test`) that I have included for convenience.
**Expected Behavior:**
Ideally `cp.pad` should provide performance comparable to or better than the CPU padding approach (given that data transfer is generally expensive).
If the above is not possible, then a ⚠️ warning note should be present in the documentation.
Additionally, I would have expected ` cp.ascontiguousarray` to fix it, but it seems to have no effect.
**Performance Results:**
```
memory_test - cp.pad is 1.43x slower
correlation_mul - cp.pad is 1.70x slower
correlation_fft - cp.pad is 5.16x slower
```
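For what it's worth, the following workaround sketch sidesteps `cp.pad` by allocating a zeroed buffer and slice-assigning into it (an assumption about where the slowdown comes from, not a confirmed fix):
```python
import cupy as cp

def pad_constant_2d(arr, padding):
    # equivalent of cp.pad(arr, padding, mode='constant') for a 2-D array
    (top, bottom), (left, right) = padding
    out = cp.zeros((arr.shape[0] + top + bottom,
                    arr.shape[1] + left + right), dtype=arr.dtype)
    out[top:top + arr.shape[0], left:left + arr.shape[1]] = arr
    return out
```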
### To Reproduce
```py
import numpy as np
import cupy as cp
import time
def correlation_fft(template, image):
fft_shape = (image.shape[0] + template.shape[0] - 1, image.shape[1] + template.shape[1] - 1)
template_fft = cp.fft.rfft2(template, s=fft_shape)
image_fft = cp.fft.rfft2(image, s=fft_shape)
result = template_fft * cp.conj(image_fft)
return cp.fft.irfft2(result)
def correlation_mul(template, image):
crop_image = image[:template.shape[0], :template.shape[1]]
return template * crop_image
def memory_test(template, image):
"""Just sum the template multiple times to test memory bandwidth"""
result = template
for _ in range(10):
result = result + template
return result
def benchmark(func, image, template, n_iter=10):
"""Benchmark a function by running it n_iter times"""
times = []
for _ in range(n_iter):
start = time.perf_counter()
func(template, image)
cp.cuda.Stream.null.synchronize()
times.append(time.perf_counter() - start)
return np.mean(times), np.std(times)
# Create sample data with your dimensions
template_shape = (191, 204)
image_shape = (545, 768)
template_padding = ((44, 45), (38, 38))
# Create random data
cpu_image = np.random.random(image_shape).astype(np.float32)
gpu_image = cp.asarray(cpu_image)
cpu_template = np.random.random(template_shape).astype(np.float32)
gpu_template = cp.asarray(cpu_template)
# Test 1: Pad on GPU
padded_template_gpu = cp.pad(gpu_template, template_padding, mode='constant')
padded_template_gpu = cp.ascontiguousarray(padded_template_gpu) # Ensure the memory is contiguous HAS NO EFFECT
mean_gpu, std_gpu = benchmark(memory_test, gpu_image, padded_template_gpu, 10)
print(f"memory_test time when pad done on GPU (cp.pad): {mean_gpu:.6f} ± {std_gpu:.6f} seconds")
# Test 2: Pad on CPU
padded_template_cpu = np.pad(cpu_template, template_padding, mode='constant')
padded_template_cpu_to_gpu = cp.asarray(padded_template_cpu)
mean_cpu, std_cpu = benchmark(memory_test, gpu_image, padded_template_cpu_to_gpu, 10)
print(f"memory_test time when pad done on CPU (np.pad): {mean_cpu:.6f} ± {std_cpu:.6f} seconds")
# Results.
print(f"\nRatio (GPU vs CPU): GPU is {mean_gpu/mean_cpu:.2f}x slower")
# Verify results are similar
cp.testing.assert_allclose(
correlation_fft(padded_template_gpu, gpu_image),
correlation_fft(padded_template_cpu_to_gpu, gpu_image),
rtol=1e-5
)
```
### Installation
Wheel (`pip install cupy-***`)
### Environment
```
OS : Windows-10-10.0.26100-SP0
Python Version : 3.11.9
CuPy Version : 13.3.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 2.1.3
SciPy Version : 1.14.1
Cython Build Version : 0.29.36
Cython Runtime Version : None
CUDA Root : C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6
nvcc PATH : C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin\nvcc.EXE
CUDA Build Version : 12060
CUDA Driver Version : 12060
CUDA Runtime Version : 12060 (linked to CuPy) / 12060 (locally installed)
CUDA Extra Include Dirs : []
cuBLAS Version : 120604
cuFFT Version : 11300
cuRAND Version : 10307
cuSOLVER Version : (11, 7, 1)
cuSPARSE Version : 12504
NVRTC Version : (12, 6)
Thrust Version : 200600
CUB Build Version : 200600
Jitify Build Version : 1a0ca0e
cuDNN Build Version : None
cuDNN Version : None
NCCL Build Version : None
NCCL Runtime Version : None
cuTENSOR Version : None
cuSPARSELt Build Version : None
Device 0 Name : NVIDIA GeForce RTX 4090 Laptop GPU
Device 0 Compute Capability : 89
Device 0 PCI Bus ID : 0000:01:00.0
```
Also tested on an AMD Threadripper machine with a 4070.
### Additional Information
_No response_
|
open
|
2025-02-20T17:15:58Z
|
2025-02-26T19:44:10Z
|
https://github.com/cupy/cupy/issues/8973
|
[
"issue-checked"
] |
JohnHardy
| 4
|
postmanlabs/httpbin
|
api
| 100
|
port to python 3
|
it's time.
|
closed
|
2013-06-14T20:09:29Z
|
2018-04-26T17:50:59Z
|
https://github.com/postmanlabs/httpbin/issues/100
|
[] |
kennethreitz
| 6
|
ultralytics/yolov5
|
deep-learning
| 12,674
|
KeyError: 'train'
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
```
Transferred 308/362 items from yolov5\weights\yolov5s.pt
Traceback (most recent call last):
  File "yolov5/train.py", line 542, in <module>
    train(hyp, opt, device, tb_writer)
  File "yolov5/train.py", line 100, in train
    train_path = data_dict['train']
KeyError: 'train'
```
### Additional
_No response_
|
closed
|
2024-01-27T10:18:24Z
|
2024-10-20T19:38:19Z
|
https://github.com/ultralytics/yolov5/issues/12674
|
[
"question",
"Stale"
] |
2ljz
| 3
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 725
|
Loss of Tacotron
|
May I ask what the loss actually is while training the synthesizer?
For the new Pytorch repo, as mentioned in #653 , the loss is the sum of:
1. L1 loss +L2 loss of decoder output
2. L2 loss of Mel spectrogram after Post-Net
3. Cross entropy of Stop Token
I can also see it in the code:
```python
# Backward pass
m1_loss = F.mse_loss(m1_hat, mels) + F.l1_loss(m1_hat, mels)
m2_loss = F.mse_loss(m2_hat, mels)
stop_loss = F.binary_cross_entropy(stop_pred, stop)
loss = m1_loss + m2_loss + stop_loss
```
However, what is the loss in the old Tensorflow repo?
In the original paper it simply mentioned
"We extend [15] by augmenting the L2 loss on the predicted spectrogram with an additional L1 loss. "
In the RTVC thesis it is stated that
"The loss function is the L2 loss between the predicted and ground truth mel spectrograms. "
In the code there are some items related to loss, including `eval_losses`, `before_losses`, `after_losses`, `stop_token_losses`, `linear_losses`, and `linear_loss`.
Are these the loss terms, or did I miss the lines that define the loss?
|
closed
|
2021-04-05T14:04:40Z
|
2021-04-11T06:25:59Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/725
|
[] |
chankl3579
| 2
|
nicodv/kmodes
|
scikit-learn
| 53
|
kmodes package dependencies
|
Is it possible to install kmodes with the following latest packages?
- numpy 1.13.1
- scipy 0.19.1
- scikit-learn 0.19.0
|
closed
|
2017-09-07T06:33:27Z
|
2017-09-07T18:17:14Z
|
https://github.com/nicodv/kmodes/issues/53
|
[
"question"
] |
musharif
| 1
|
gradio-app/gradio
|
data-science
| 10,601
|
change event doesn't be detected when value is component in gr.Dataframe
|
### Describe the bug
I have a seven-column dataframe whose final column is an HTML-style select component.
When I pick a value in that column, the change event is not detected.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
import pandas as pd  # needed for pd.read_csv in load_file
def __correct__(row):
return """<select>
<option value='0' selected>Unknown</option>
<option value='1'>NASR</option>
<option value='2'>Microsoft</option>
<option value='3'>Reference</option>
</select>"""
def load_file(src_full_path_name):
df = pd.read_csv(src_full_path_name)
df['Correct'] = df['Correct'].apply(__correct__)
return gr.Dataframe(value=df, type="pandas", datatype='html',
max_height=750, interactive=True, show_label=False,
render=True, visible=True, label='table')
def capture_edited_data(dataframe):
if len(dataframe) == 0:
return gr.update()
    # This is never called when I select different values
with gr.Blocks() as demo:
source_file = gr.Dropdown(label="Choose one File", multiselect=False, elem_id='source_file')
load_btn = gr.Button("Load", elem_id="load_btn", interactive=True)
table_output = gr.Dataframe(row_count=1, col_count=7, label="table", show_label=False)
table_output.change(fn=capture_edited_data, inputs=table_output, outputs=[])
load_btn.click(fn=load_file, inputs=[source_file], outputs=[table_output])
if __name__ == '__main__':
demo.launch(server_name='localhost',
server_port=54321,
share=False, debug=True)
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
aiofiles==23.2.1
annotated-types==0.7.0
anyio==4.8.0
certifi==2024.12.14
charset-normalizer==3.4.1
click==8.1.8
dohq-artifactory==1.0.0
exceptiongroup==1.2.2
fastapi==0.115.6
ffmpy==0.5.0
filelock==3.17.0
fsspec==2024.12.0
gradio==5.12.0
gradio_client==1.5.4
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
huggingface-hub==0.27.1
idna==3.10
Jinja2==3.1.5
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
numpy==2.2.2
orjson==3.10.15
packaging==24.2
pandas==2.2.3
pillow==11.1.0
pip==24.3.1
pydantic==2.10.5
pydantic_core==2.27.2
pydub==0.25.1
Pygments==2.19.1
PyJWT==2.10.1
python-dateutil==2.9.0.post0
python-multipart==0.0.20
pytz==2024.2
PyYAML==6.0.2
requests==2.32.3
rich==13.9.4
ruff==0.9.2
safehttpx==0.1.6
semantic-version==2.10.0
setuptools==73.0.1
shellingham==1.5.4
six==1.17.0
sniffio==1.3.1
starlette==0.41.3
tomlkit==0.13.2
tqdm==4.67.1
typer==0.15.1
typing_extensions==4.12.2
tzdata==2025.1
urllib3==2.3.0
uvicorn==0.34.0
websockets==14.2
wheel==0.44.0
```
### Severity
Blocking usage of gradio
|
closed
|
2025-02-17T04:14:08Z
|
2025-03-04T19:16:16Z
|
https://github.com/gradio-app/gradio/issues/10601
|
[
"bug",
"💾 Dataframe"
] |
Yb2S3Man
| 1
|
dask/dask
|
numpy
| 10,842
|
`make_meta` over a Dask Dataframe returns a reference, not a new object
|
**Describe the issue**:
Reading the documentation of [`make_meta`](https://docs.dask.org/en/stable/generated/dask.dataframe.utils.make_meta.html) it states that
> This method creates meta-data based on the type of x
so my understanding is that a new object is returned. However, one can check that, when running `make_meta` over a Dask DataFrame, a reference to the DataFrame's meta is returned. Thus, if any change is made to the returned meta, the meta of the DataFrame is modified as well.
**Minimal Complete Verifiable Example**:
```python
import dask.dataframe as dd
import pandas as pd
df = pd.DataFrame({
"col1": [1, 2, 3],
"col2": [4, 5, 6],
})
df = dd.from_pandas(df, npartitions=2)
print(df.columns) # returns Index(['col1', 'col2'], dtype='object')
from dask.dataframe.utils import make_meta
meta = make_meta(df)
meta["flag"] = pd.Series([], dtype="bool")
print(df.columns) # returns Index(['col1', 'col2', 'flag'], dtype='object')
```
**Anything else we need to know?**:
In my experience `make_meta` is very useful to obtain the current meta of a Dataframe and then update it with the necessary changes to provide appropriate meta information to methods such as `map_partitions` or `assign`, so that Dask knows how you intend to change the structure of the Dataframe. But since `make_meta` returns a reference it seems we are forced to make changes to a copy of this meta object, which is inconvenient. Is there any design reason for returning a reference instead of a copy?
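For now, copying the returned meta before mutating it seems to be a workable pattern:
```python
meta = make_meta(df).copy()                # detach from the DataFrame's own meta
meta["flag"] = pd.Series([], dtype="bool")
print(df.columns)                          # still Index(['col1', 'col2'], dtype='object')
```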
**Environment**:
- Dask version: 2024.1.0
- Python version: 3.12.1
- Operating System: Ubuntu 22.04.3 LTS
- Install method (conda, pip, source): pip
|
open
|
2024-01-22T09:08:36Z
|
2024-01-22T09:08:47Z
|
https://github.com/dask/dask/issues/10842
|
[
"needs triage"
] |
albarji
| 0
|
encode/httpx
|
asyncio
| 2,286
|
Setting Cookies on request
|
Hello, I can't find the correct way to set all the needed cookies. This is the cookie tab from the response:

Now I want to use a client and have tried a lot of different things to set the cookies, but none of them worked. Here is the current code:
```py
data1 = {"deviceId": "", "userName": username, "password": password}
cookies = httpx.Cookies()
resp1 = httpx.post("https://api.fedex.com/user/v4/login", headers=headers, json=data1, follow_redirects=True)
try:
with httpx.Client(proxies=proxy_auth if proxy_type != "" else None) as client:
client.cookies.set(name="FedEx-Secure-Login-Cookie", value="true", domain="https://fedex.com/secure-login/")
client.cookies.set(name="_abck", value=resp1.cookies.get("_abck"), domain="https://edex.com/secure-login/")
client.cookies.set(name="expires", value=resp1.cookies.get("expires"), domain="https://edex.com/secure-login/")
resp2 = client.post("https://api.fedex.com/user/v4/login", headers=headers, json=data1, follow_redirects=True)
except httpx.ProxyError:
with httpx.Client(proxies=None) as client: resp2 = client.post("https://api.fedex.com/user/v4/login", headers=headers, json=data1, follow_redirects=True)
set_cookie = f"siteDC=edc; xacc=DE; fdx_cbid=30207856121656437730110340266011; fdx_locale=de_DE; fdx_redirect=de-de; cc_path=de; isTablet=false; isMobile=false; isWireless=false; aemserver=PROD-P-dotcom-p0008480.prod.cloud.fedex.com; _abck={resp1.cookies.get('_abck')}; ADRUM=s=1656441915426&r=https%3A%2F%2Fwww.fedex.com%2Fsecure-login%2F%3F-1854754532; s_invisit=true; gpv_pageName=fedex/apps/wlgn/login-credentials; mboxEdgeCluster=37"
headers["Cookie"] = set_cookie
print(resp1.cookies)
print(cookies.set(name="FedEx-Secure-Login-Cookie", value="true"))
print(cookies.set(name="_abck", value=resp1.cookies.get("_abck"), domain="https://edex.com/secure-login/"))
print(cookies.get("FedEx-Secure-Login-Cookie"))
print("-----------------------------------------------------")
print(resp2.text)
# print(resp2.json()["errors"][0]["code"])
```
The error I'm seeing every time:
```
<Cookies[<Cookie _abck=9D653246B520E9AEF2535C75C976823E~-1~YAAQFuZlX7wNOo6BAQAAoPWrtghDDCi7c261ywX9SyxVtDo91R0pgBQnSUW6flrVg8KOrg5fkuJaFeczho1eCwDtPKn3eOsOarydFUK4F6UE00/fUEITlzn0kTf/Jiuny1j7+jULCNUR5QetwGseKivnipCbwQbpKZePNd1aURkM0PrjMOjaJjgGDl6SU6hDhWaBiRYNboIlBSz1bX60dKKZfUFTjXMAfmARuk1kVTYGt4bsIt/Fs7TEAFB20qhwj/rU5qEjkdrAMLoJ8uMSL5N5i8zi6t36qevc9U3G8+iG9d8fjvIh8HHJ5MyIwjy1qsJcn+8YYwTZnaQtK+paNhyly0CnCQ62UdsaS33hBEuhKqY9//Qu7yNrVi0=~-1~-1~-1 for .fedex.com />, <Cookie bm_sz=118A0D88115C5E503300B088C49F1D00~YAAQFuZlX70NOo6BAQAAoPWrthBks2eyNDxhigIZtL7o/c+a0YDQOAGUre1B4x2Z0tnW5r8SFalRRg0cs05N1umnjBjImwpcA2Z6/vHU96P96RTT+01ZCVzypgGKelWNlQENBpkt8vQH1HTVjR8Nh+PL9Pu+mtB5cEn/WGcoIKIrLEUfRzHachJ9VZ66EZ/9JMWCIPAPWkXxPwwiZEn5jHfJsvnCi8qMHw8xl4M7y9KR42JzK0HDk85CyB82hHrMdTk1k6Xu+7db7LG+O3/XXrGQG6alX2Nrgukz+0zNjqf4Fg==~4273478~4404549 for .fedex.com />]>
None
None
true
-----------------------------------------------------
<HTML><HEAD>
<TITLE>Access Denied</TITLE>
</HEAD><BODY>
<H1>Access Denied</H1>
You don't have permission to access "http://api.fedex.com/user/v4/login" on this server.<P>
Reference #18.16e6655f.1656627131.2679623b
</BODY>
</HTML>
```
This error is cookie-related, because when I manually enter the correct cookie in the request header, everything works fine.
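For reference, my current understanding of how `Cookies.set` is meant to be called: the `domain` is a bare host rather than a URL (the values below are placeholders):
```python
import httpx

cookies = httpx.Cookies()
cookies.set("FedEx-Secure-Login-Cookie", "true", domain="fedex.com", path="/")

with httpx.Client(cookies=cookies) as client:
    resp = client.get("https://www.fedex.com/secure-login/")
```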
|
closed
|
2022-06-30T22:18:51Z
|
2022-07-01T08:08:06Z
|
https://github.com/encode/httpx/issues/2286
|
[] |
FuckingToasters
| 0
|
gunthercox/ChatterBot
|
machine-learning
| 1,582
|
AttributeError: module 'chatterbot.logic' has no attribute 'UnitConversion'
|
While running the following example from the Git Example page,
``` Python
from chatterbot import ChatBot
bot = ChatBot(
'Unit Converter',
logic_adapters=[
'chatterbot.logic.UnitConversion',
]
)
questions = [
'How many meters are in a kilometer?',
'How many meters are in one inch?',
'0 celsius to fahrenheit',
'one hour is how many minutes ?'
]
# Prints the convertion given the specific question
for question in questions:
response = bot.get_response(question)
print(question + ' - Response: ' + response.text)
```
I get following error
```
AttributeError: module 'chatterbot.logic' has no attribute 'UnitConversion'
```
I checked `chatterbot/logic/__init__.py` and its contents are as follows:
```
__all__ = (
'UnitConversion',
'LogicAdapter',
'BestMatch',
'MathematicalEvaluation',
'SpecificResponseAdapter',
'TimeLogicAdapter',
```
My chatterbot version is 1.0.0a4.
Please let me know how to resolve this error.
|
closed
|
2019-01-23T08:50:21Z
|
2019-11-21T09:23:17Z
|
https://github.com/gunthercox/ChatterBot/issues/1582
|
[] |
sarangjain40
| 2
|
stanfordnlp/stanza
|
nlp
| 986
|
AttributeError: 'NoneType' object has no attribute 'enum_types_by_name'
|
I got this error while running stanza:
```python
import stanza
from stanza.pipeline.core import Pipeline
from stanza.pipeline.constituency_processor import ConstituencyProcessor
import stanza.models.constituency.trainer as trainer
from stanza.server.parser_eval import EvaluateParser
from stanza.protobuf import to_text
```
```
AttributeError: 'NoneType' object has no attribute 'enum_types_by_name'
```
|
closed
|
2022-03-31T16:50:17Z
|
2022-06-07T19:28:40Z
|
https://github.com/stanfordnlp/stanza/issues/986
|
[
"question"
] |
mennatallah644
| 4
|