| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/text-generation-inference
|
nlp
| 2,199
|
error warming up cohere/aya-23-35b model: error in `reshape_and_cache` function with TGI 2.1.1
|
### System Info
`ghcr.io/huggingface/text-generation-inference:2.1.1` docker image
running on 2 NVIDIA H100s
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
Run the docker image with the following environment variables:
* MODEL_ID: "CohereForAI/aya-23-35B"
* REVISION: "31d6fd858f20539a55401c7ad913086f54d9ca2c"
* MAX_BATCH_PREFILL_TOKENS: "8192"
* MAX_INPUT_TOKENS: "8191"
* MAX_TOTAL_TOKENS: "8192"
* MAX_CONCURRENT_REQUESTS: "300"
* MAX_STOP_SEQUENCES: "55"
The server runs into an error when warming up the model.
Similar error to #1738.
```
Method Warmup encountered an error.
Traceback (most recent call last):
File "/opt/conda/bin/text-generation-server", line 8, in <module>
sys.exit(app())
File "/opt/conda/lib/python3.10/site-packages/typer/main.py", line 311, in __call__
return get_command(self)(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/typer/core.py", line 778, in main
return _main(
File "/opt/conda/lib/python3.10/site-packages/typer/core.py", line 216, in _main
rv = self.invoke(ctx)
File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
...
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/server.py", line 125, in Warmup
max_supported_total_tokens = self.model.warmup(batch)
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/flash_causal_lm.py", line 985, in warmup
_, batch, _ = self.generate_token(batch)
File "/opt/conda/lib/python3.10/site-packages/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/flash_causal_lm.py", line 1253, in generate_token
out, speculative_logits = self.forward(batch, adapter_data)
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/flash_causal_lm.py", line 1178, in forward
logits, speculative_logits = self.model.forward(
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_cohere_modeling.py", line 521, in forward
hidden_states = self.model(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_cohere_modeling.py", line 470, in forward
hidden_states, residual = layer(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_cohere_modeling.py", line 397, in forward
attn_output = self.self_attn(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_cohere_modeling.py", line 289, in forward
reshape_and_cache(key, value, kv_cache[0], kv_cache[1], slots)
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/attention/cuda.py", line 31, in reshape_and_cache
cache_ops.reshape_and_cache(
TypeError: reshape_and_cache(): incompatible function arguments. The following argument types are supported:
1. (arg0: torch.Tensor, arg1: torch.Tensor, arg2: torch.Tensor, arg3: torch.Tensor, arg4: torch.Tensor, arg5: str, arg6: float) -> None
Invoked with: tensor([[[ ....], [...], [bunch of 0 tensors]]])
```
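The failure can be reproduced in isolation: per the signature in the error, the compiled `reshape_and_cache` op now expects seven positional arguments (the last two being a cache-dtype string and a scale float), while the TGI wrapper still passes five. A minimal pure-Python stand-in (hypothetical names, not the actual CUDA op) raises the same kind of `TypeError`:

```python
# Pure-Python stand-in mirroring the 7-argument signature from the
# error message; names and argument types here are illustrative only.
def reshape_and_cache(key, value, key_cache, value_cache, slots,
                      kv_cache_dtype, kv_scale):
    """Accepts the seven arguments the compiled op advertises."""
    return None

try:
    # Old five-argument call site, as in cuda.py line 31 above
    reshape_and_cache([1.0], [1.0], [[0.0]], [[0.0]], [0])
except TypeError as exc:
    print(f"call failed: {exc}")
```

Updating the caller to pass the extra arguments, or pinning a TGI image whose wrapper matches the installed kernel, resolves this class of mismatch.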
### Expected behavior
No errors when warming up the model.
|
closed
|
2024-07-08T05:31:54Z
|
2024-07-18T14:01:13Z
|
https://github.com/huggingface/text-generation-inference/issues/2199
|
[] |
Jason-CKY
| 1
|
ShishirPatil/gorilla
|
api
| 286
|
Is it possible to fine-tune without Azure AI studio?
|
Is it possible to implement RAFT without Azure AI studio? I am planning to use open-source models.
|
open
|
2024-03-25T12:03:17Z
|
2024-03-28T23:15:26Z
|
https://github.com/ShishirPatil/gorilla/issues/286
|
[] |
mark86v1
| 2
|
MaxHalford/prince
|
scikit-learn
| 61
|
Unreliable results while FAMD model is returned from a function.
|
Hi,
I noticed that when the FAMD transformer is returned from a function, subsequent results are sometimes unreliable. The following code snippet illustrates the issue:
```python
import prince
import pandas as pd
from sklearn import linear_model
cv_train = pd.read_csv('cv_training3.csv')
famd = prince.FAMD(
n_components=25,
n_iter=10,
copy=True,
check_input=True,
engine='auto',
random_state=42)
x_cols = [col for col in cv_train.columns if col != 'LogSalePrice']
print(x_cols)
X = cv_train[x_cols]
famd_model = famd.fit(X)
transformed_X = famd_model.transform(X)
my_model = linear_model.LinearRegression()
my_model.fit(transformed_X, cv_train['LogSalePrice'].values.ravel())
cv_validation = pd.read_csv('cv_validation3.csv')
old_predictions = my_model.predict(transformed_X)
print(old_predictions.mean())
new_X = cv_validation[x_cols]
new_transformed_X = famd_model.transform(new_X)
new_predictions = my_model.predict(new_transformed_X)
print(new_predictions.mean())
```
We have the following output :
```
['MSSubClass', 'LogGrLivArea', 'MSZoning']
12.030180212569563
-2158687260190.6848
```
Now consider the following code snippet :
```python
import pandas as pd
import prince
from sklearn import linear_model
def get_trained_model_and_transform(X, Y, num_components=25, num_iter=20):
famd = prince.FAMD(
n_components=num_components,
n_iter=num_iter,
copy=True,
check_input=True,
engine='auto',
random_state=42)
famd_model = famd.fit(X)
transformed_X = famd_model.transform(X)
my_model = linear_model.LinearRegression()
my_model.fit(transformed_X,Y)
return (my_model, famd_model)
cv_train = pd.read_csv('cv_training3.csv')
x_cols = [col for col in cv_train.columns if col != 'LogSalePrice']
print(x_cols)
X = cv_train[x_cols]
(my_model, famd_model) = get_trained_model_and_transform(X,
cv_train['LogSalePrice'].values.ravel())
cv_validation = pd.read_csv('cv_validation3.csv')
transformed_X = famd_model.transform(X)
old_predictions = my_model.predict(transformed_X)
print(old_predictions.mean())
new_X = cv_validation[x_cols]
new_transformed_X = famd_model.transform(new_X)
new_predictions = my_model.predict(new_transformed_X)
print(new_predictions.mean())
```
The output is the following :
```
['MSSubClass', 'LogGrLivArea', 'MSZoning']
12.0300579767497
11.946205619238583
```
The only difference between the 2 snippets, **is that we have wrapped the transform and model building to a function in the second whereas we have not done that in the first** (and the results are drastically different).
I am attaching the input data files here as well. Feel free to let me know if you need more information.
NOTE: For ease of testing, I have included both snippets as two scripts (script1.py and script2.py) in the attached folder. I have also added a screenshot of the output from my Mac terminal.
**However, I also tested these two scripts on an online Python interface (https://repl.it/languages/python3) and they appear to give identical outputs.** I am using the released version of the prince module for these scripts and had to change my matplotlib backend to Agg to get this working on my Mac. Could that be the reason for the different results?
[data.zip](https://github.com/MaxHalford/prince/files/3040285/data.zip)
Thanks
|
closed
|
2019-04-02T19:04:05Z
|
2019-06-24T14:10:16Z
|
https://github.com/MaxHalford/prince/issues/61
|
[] |
babinu-uthup-4JESUS-zz
| 4
|
kizniche/Mycodo
|
automation
| 708
|
No translation possible
|
Hello!
Since a few upgrades ago, I can no longer change the web interface to any other language. I even reinstalled Mycodo without success. I think the translation still worked in 7.7.9; then I upgraded straight to 7.8.4 and it stopped working, and it does not work in 7.9.1 either. Only Chinese, Serbian, Swedish and Norwegian are working.
RPi3 B
Raspbian Buster Lite
And once it works again, you should update the /home/pi/Mycodo/mycodo/mycodo_flask/translations/....../LC_MESSAGES/messages.po files. There are now many new untranslated terms in the WebUI. Is editing them possible with Notepad++? Then I could update the German translation.
PS: And while you're at it, could you look at my log file? After reinstalling (Raspbian + Mycodo), I can no longer generate PWM on any pin.

|
closed
|
2019-11-01T12:23:02Z
|
2019-11-03T21:26:24Z
|
https://github.com/kizniche/Mycodo/issues/708
|
[
"bug"
] |
grux77
| 22
|
plotly/dash-table
|
dash
| 318
|
ability to include column headers when copying-and-pasting
|
Currently, when you copy and paste contents from the DataTable into Excel, only the currently selected contents are included. This item would enable copy-and-paste to include the column headers along with the data.
This setting would be exposed to the Dash developer through a property like `include_headers_on_copy_and_paste=True`.
Note that this would be independent of the rows or columns (#317) that have been selected.
|
closed
|
2018-12-19T22:26:20Z
|
2019-08-08T20:28:18Z
|
https://github.com/plotly/dash-table/issues/318
|
[
"dash-type-enhancement",
"dash-meta-sponsored",
"size: 3"
] |
chriddyp
| 5
|
graphdeco-inria/gaussian-splatting
|
computer-vision
| 1,120
|
2060(12G) running 3DGS appears out of memory?
|
I only used 20 images. It works fine on my 4060 (8 GB), but the 2060 (12 GB) appears to run out of memory.
|
open
|
2024-12-25T07:47:48Z
|
2024-12-25T07:50:05Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/1120
|
[] |
foooolish
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 1,029
|
Results of training on my own dataset
|
Hi, when I train with my own dataset, the result is as shown in the pictures below. Can you tell me where the problem is? Maybe the amount of data is too small? (My dataset includes 20 minutes of voice data in total.) Looking forward to your reply:


|
open
|
2022-02-28T08:09:30Z
|
2022-03-01T14:50:55Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1029
|
[] |
dongdongdashen
| 1
|
Zeyi-Lin/HivisionIDPhotos
|
fastapi
| 216
|
Page does not load completely behind a reverse proxy
|
After deploying with Docker and putting it behind an NPM (Nginx Proxy Manager) reverse proxy, the page loads as shown below; accessing the page directly works fine.

|
open
|
2024-12-05T04:33:53Z
|
2025-02-04T11:48:20Z
|
https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/216
|
[] |
kyonefq
| 1
|
vitalik/django-ninja
|
rest-api
| 1,404
|
How to change a response timezone based on a query parameter?
|
Hi, I want to know how to change a response's timezone based on a query parameter, like https://example.com?timezone=utc, https://example.com?timezone=Asia/Tokyo, or https://example.com?timezone=Asia/Kolkata.
I think the answer in https://github.com/vitalik/django-ninja/issues/786 is related to this problem,
but I can't understand how to change the timezone dynamically.
```python
import datetime

from ninja import NinjaAPI
from ninja.renderers import JSONRenderer
from ninja.responses import NinjaJSONEncoder

class MyJsonEncoder(NinjaJSONEncoder):
    def default(self, o):
        if isinstance(o, datetime.datetime):
            ###############################
            # how to set timezone dynamically?
            ###############################
            return o.astimezone().isoformat()
        return super().default(o)

class MyJsonRenderer(JSONRenderer):
    encoder_class = MyJsonEncoder  # was NinjaJSONEncoder, which bypassed the custom encoder

api = NinjaAPI(renderer=MyJsonRenderer())
```
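One hedged sketch (not django-ninja's actual API): carry the requested timezone in a `ContextVar` that a view or middleware would set from the `?timezone=` query parameter, then read it inside the encoder. Plain `json.JSONEncoder` stands in for `NinjaJSONEncoder` here so the snippet is self-contained:

```python
# Hypothetical approach: a per-request ContextVar holding the tzinfo the
# client asked for; the JSON encoder reads it when serializing datetimes.
import datetime
import json
from contextvars import ContextVar

# Default to UTC; a view would set this from the query string,
# e.g. request_tz.set(zoneinfo.ZoneInfo(request.GET["timezone"])).
request_tz: ContextVar[datetime.tzinfo] = ContextVar(
    "request_tz", default=datetime.timezone.utc
)

class TZAwareEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, datetime.datetime):
            # Convert to whatever timezone the current request asked for
            return o.astimezone(request_tz.get()).isoformat()
        return super().default(o)

# Example: a request asked for UTC+9 (e.g. Asia/Tokyo)
request_tz.set(datetime.timezone(datetime.timedelta(hours=9)))
payload = {"created": datetime.datetime(2025, 2, 5, 0, 0, tzinfo=datetime.timezone.utc)}
print(json.dumps(payload, cls=TZAwareEncoder))  # {"created": "2025-02-05T09:00:00+09:00"}
```

Because `ContextVar` is task-local, this stays correct under async request handling; the same pattern should transfer to a `NinjaJSONEncoder` subclass.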
|
closed
|
2025-02-05T02:12:41Z
|
2025-02-05T02:26:20Z
|
https://github.com/vitalik/django-ninja/issues/1404
|
[] |
owari-taro
| 1
|
python-arq/arq
|
asyncio
| 280
|
Specifying set of weekday strings in cron jobs
|
When scheduling a cron job, something like `weekday = {'mon', 'wed', 'fri'}` isn't parsable by the worker. Relatedly, `WeekdayOptionType` allows int, set of ints, or string literal; it does not allow a set of string literals.
This was mildly inconvenient for me, and I was wondering if I could get your thoughts on the following changes to allow for a set of string literals when specifying the weekday.
In `typing.py`, there would be one additional import, and WeekdayOptionType would be changed as follows:
```python
from typing import get_args
OptionType = Union[None, Set[int], int]
WeekdayStrType = Literal['mon', 'tues', 'wed', 'thurs', 'fri', 'sat', 'sun']
WeekdayOptionType = Union[OptionType, WeekdayStrType, Set[WeekdayStrType]]
WEEKDAYS = get_args(WeekdayStrType)
```
In the `next_cron` function in `cron.py`, weekday would be parsed as follows:
```python
weekday_str_to_int = lambda s: WEEKDAYS.index(s.lower())
if isinstance(weekday, str):
    weekday = weekday_str_to_int(weekday)
if isinstance(weekday, set) and isinstance(next(iter(weekday)), str):
    weekday = set(map(weekday_str_to_int, weekday))
```
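Pulled together, the proposed parsing can be exercised as a self-contained sketch (`WEEKDAYS` and the helper name mirror the snippets above and are assumptions about the final implementation, not arq's current code):

```python
# Sketch of the proposed weekday normalization; WEEKDAYS mirrors the
# Literal values above and is not arq's actual constant.
WEEKDAYS = ('mon', 'tues', 'wed', 'thurs', 'fri', 'sat', 'sun')

def normalize_weekday(weekday):
    """Map a weekday string or set of strings to int(s); pass ints through."""
    to_int = lambda s: WEEKDAYS.index(s.lower())
    if isinstance(weekday, str):
        return to_int(weekday)
    if isinstance(weekday, set) and all(isinstance(w, str) for w in weekday):
        return {to_int(w) for w in weekday}
    return weekday

print(sorted(normalize_weekday({'mon', 'wed', 'fri'})))  # [0, 2, 4]
```

With this in place, `weekday = {'mon', 'wed', 'fri'}` would parse the same as `weekday = {0, 2, 4}` does today.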
Thanks for this wonderful package.
|
open
|
2021-11-19T21:16:39Z
|
2021-11-19T21:16:39Z
|
https://github.com/python-arq/arq/issues/280
|
[] |
leeek
| 0
|
sgl-project/sglang
|
pytorch
| 4,339
|
[Bug] sglang-router load model failed randomly
|
### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
1. In this test (tp=2, dp=4), all the workers finished "Capture cuda graph", but one worker is missing from the router.
2. The log:
[2025-03-12 08:52:39 DP0 TP1] Registering 1311 cuda graph addresses
[2025-03-12 08:52:39 DP0 TP0] Registering 1311 cuda graph addresses
[2025-03-12 08:52:39 DP0 TP1] Capture cuda graph end. Time elapsed: 12.96 s
[2025-03-12 08:52:39 DP0 TP0] Capture cuda graph end. Time elapsed: 12.96 s
96%|█████████▌| 22/23 [00:13<00:00, 2.05it/s][2025-03-12 08:52:40 DP0 TP0] max_total_num_tokens=935444, chunked_prefill_size=8192, max_prefill_tokens=16384, max_running_requests=4097, context_len=32768
[2025-03-12 08:52:40 DP0 TP1] max_total_num_tokens=935444, chunked_prefill_size=8192, max_prefill_tokens=16384, max_running_requests=4097, context_len=32768
[2025-03-12 08:52:40] INFO: Started server process [1029]
[2025-03-12 08:52:40] INFO: Waiting for application startup.
[2025-03-12 08:52:40] INFO: Application startup complete.
[2025-03-12 08:52:40] INFO: Uvicorn running on http://0.0.0.0:32300 (Press CTRL+C to quit)
100%|██████████| 23/23 [00:13<00:00, 1.69it/s]
[2025-03-12 08:52:40 DP3 TP1] Registering 1311 cuda graph addresses
[2025-03-12 08:52:40 DP3 TP0] Registering 1311 cuda graph addresses
[2025-03-12 08:52:40 DP3 TP1] Capture cuda graph end. Time elapsed: 13.67 s
[2025-03-12 08:52:40 DP3 TP0] Capture cuda graph end. Time elapsed: 13.63 s
100%|██████████| 23/23 [00:13<00:00, 1.68it/s]
[2025-03-12 08:52:40 DP2 TP1] Registering 1311 cuda graph addresses
[2025-03-12 08:52:40 DP2 TP0] Registering 1311 cuda graph addresses
[2025-03-12 08:52:40 DP2 TP1] Capture cuda graph end. Time elapsed: 13.74 s
[2025-03-12 08:52:40 DP2 TP0] Capture cuda graph end. Time elapsed: 13.72 s
96%|█████████▌| 22/23 [00:13<00:00, 1.92it/s][2025-03-12 08:52:41 DP3 TP1] max_total_num_tokens=935444, chunked_prefill_size=8192, max_prefill_tokens=16384, max_running_requests=4097, context_len=32768
[2025-03-12 08:52:41 DP3 TP0] max_total_num_tokens=935444, chunked_prefill_size=8192, max_prefill_tokens=16384, max_running_requests=4097, context_len=32768
[2025-03-12 08:52:41] INFO: Started server process [1032]
[2025-03-12 08:52:41] INFO: Waiting for application startup.
[2025-03-12 08:52:41] INFO: Application startup complete.
[2025-03-12 08:52:41] INFO: Uvicorn running on http://0.0.0.0:32303 (Press CTRL+C to quit)
[2025-03-12 08:52:41 DP2 TP0] max_total_num_tokens=935444, chunked_prefill_size=8192, max_prefill_tokens=16384, max_running_requests=4097, context_len=32768
[2025-03-12 08:52:41 DP2 TP1] max_total_num_tokens=935444, chunked_prefill_size=8192, max_prefill_tokens=16384, max_running_requests=4097, context_len=32768
[2025-03-12 08:52:41] INFO: Started server process [1031]
[2025-03-12 08:52:41] INFO: Waiting for application startup.
[2025-03-12 08:52:41] INFO: Application startup complete.
[2025-03-12 08:52:41] INFO: Uvicorn running on http://0.0.0.0:32302 (Press CTRL+C to quit)
100%|██████████| 23/23 [00:14<00:00, 1.60it/s]
[2025-03-12 08:52:41 DP1 TP1] Registering 1311 cuda graph addresses
[2025-03-12 08:52:41 DP1 TP0] Registering 1311 cuda graph addresses
[2025-03-12 08:52:41 DP1 TP1] Capture cuda graph end. Time elapsed: 14.41 s
[2025-03-12 08:52:41 DP1 TP0] Capture cuda graph end. Time elapsed: 14.42 s
[2025-03-12 08:52:41 DP1 TP1] max_total_num_tokens=935444, chunked_prefill_size=8192, max_prefill_tokens=16384, max_running_requests=4097, context_len=32768
[2025-03-12 08:52:41 DP1 TP0] max_total_num_tokens=935444, chunked_prefill_size=8192, max_prefill_tokens=16384, max_running_requests=4097, context_len=32768
[2025-03-12 08:52:41] INFO: Started server process [1030]
[2025-03-12 08:52:41] INFO: Waiting for application startup.
[2025-03-12 08:52:41] INFO: Application startup complete.
[2025-03-12 08:52:41] INFO: Uvicorn running on http://0.0.0.0:32301 (Press CTRL+C to quit)
[2025-03-12 08:52:42] INFO: 127.0.0.1:39620 - "GET /get_model_info HTTP/1.1" 200 OK
[2025-03-12 08:52:42 DP3 TP0] Prefill batch. #new-seq: 1, #new-token: 7, #cached-token: 0, cache hit rate: 0.00%, token usage: 0.00, #running-req: 0, #queue-req: 0
[2025-03-12 08:52:42] INFO: 127.0.0.1:37470 - "GET /get_model_info HTTP/1.1" 200 OK
[2025-03-12 08:52:42 DP2 TP0] Prefill batch. #new-seq: 1, #new-token: 7, #cached-token: 0, cache hit rate: 0.00%, token usage: 0.00, #running-req: 0, #queue-req: 0
[2025-03-12 08:52:42] INFO: 127.0.0.1:46776 - "GET /get_model_info HTTP/1.1" 200 OK
[2025-03-12 08:52:42 DP1 TP0] Prefill batch. #new-seq: 1, #new-token: 7, #cached-token: 0, cache hit rate: 0.00%, token usage: 0.00, #running-req: 0, #queue-req: 0
2025-03-12 08:52:43,589 - INFO - flashinfer.jit: Loading JIT ops: batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False_sm90
2025-03-12 08:52:43,634 - INFO - flashinfer.jit: Loading JIT ops: batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False_sm90
2025-03-12 08:52:43,638 - INFO - flashinfer.jit: Loading JIT ops: batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False_sm90
2025-03-12 08:52:43,671 - INFO - flashinfer.jit: Loading JIT ops: batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False_sm90
2025-03-12 08:52:43,716 - INFO - flashinfer.jit: Finished loading JIT ops: batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False_sm90
2025-03-12 08:52:43,844 - INFO - flashinfer.jit: Finished loading JIT ops: batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False_sm90
2025-03-12 08:52:44,002 - INFO - flashinfer.jit: Finished loading JIT ops: batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False_sm90
2025-03-12 08:52:44,162 - INFO - flashinfer.jit: Finished loading JIT ops: batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False_sm90
2025-03-12 08:52:44,283 - INFO - flashinfer.jit: Loading JIT ops: batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False_sm90
2025-03-12 08:52:44,287 - INFO - flashinfer.jit: Loading JIT ops: batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False_sm90
2025-03-12 08:52:44,427 - INFO - flashinfer.jit: Finished loading JIT ops: batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False_sm90
2025-03-12 08:52:44,559 - INFO - flashinfer.jit: Finished loading JIT ops: batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_qk_128_head_dim_vo_128_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False_sm90
[2025-03-12 08:52:44] INFO: 127.0.0.1:37472 - "POST /generate HTTP/1.1" 200 OK
[2025-03-12 08:52:44] The server is fired up and ready to roll!
[2025-03-12 08:52:44] INFO: 127.0.0.1:39622 - "POST /generate HTTP/1.1" 200 OK
[2025-03-12 08:52:44] The server is fired up and ready to roll!
[2025-03-12 08:52:45] INFO: 127.0.0.1:46778 - "POST /generate HTTP/1.1" 200 OK
[2025-03-12 08:52:45] The server is fired up and ready to roll!
[Router (Rust)] 2025-03-12 08:52:47 - INFO - Worker http://0.0.0.0:32300 health check is pending with error: error sending request for url (http://0.0.0.0:32300/health)
[2025-03-12 08:52:47] INFO: 127.0.0.1:46802 - "GET /health HTTP/1.1" 200 OK
[2025-03-12 08:52:47] INFO: 127.0.0.1:37518 - "GET /health HTTP/1.1" 200 OK
[2025-03-12 08:52:47] INFO: 127.0.0.1:39674 - "GET /health HTTP/1.1" 200 OK
[Router (Rust)] 2025-03-12 08:52:47 - INFO - Unhealthy workers:
[Router (Rust)] 2025-03-12 08:52:47 - INFO - http://0.0.0.0:32300 - Error: error sending request for url (http://0.0.0.0:32300/health)
[Router (Rust)] 2025-03-12 08:52:58 - INFO - Worker http://0.0.0.0:32300 health check is pending with error: error sending request for url (http://0.0.0.0:32300/health)
### Reproduction
python3 -m sglang_router.launch_server --router-worker-startup-timeout-secs 300 --model-path ${hf_model_path} --port 9999 --host 0.0.0.0 --tp 2 --dp-size 4 --mem-fraction-static 0.187 --enable-memory-saver
### Environment
2025-03-12 09:32:13,701 - INFO - flashinfer.jit: Prebuilt kernels not found, using JIT backend
INFO 03-12 09:32:19 __init__.py:194] No platform detected, vLLM is running on UnspecifiedPlatform
Python: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H800
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.6, V12.6.77
CUDA Driver Version: 535.54.03
PyTorch: 2.5.0a0+e000cf0ad9.nv24.10
sglang: 0.4.3.post2
sgl_kernel: 0.0.3.post6
flashinfer: 0.2.2.post1
triton: 3.1.0
transformers: 4.48.3
torchao: 0.9.0
numpy: 1.26.4
aiohttp: 3.10.5
fastapi: 0.115.7
hf_transfer: 0.1.9
huggingface_hub: 0.28.0
interegular: 0.3.3
modelscope: 1.23.2
orjson: 3.10.15
packaging: 23.2
psutil: 6.0.0
pydantic: 2.9.2
multipart: 0.0.20
zmq: 26.2.0
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.3.dev0+g0408efc6.d20250305
openai: 1.65.3
anthropic: 0.49.0
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV8 NV8 NV8 NV8 NV8 NV8 NV8 PIX NODE NODE NODE SYS SYS SYS SYS 0-85 0 N/A
GPU1 NV8 X NV8 NV8 NV8 NV8 NV8 NV8 NODE PIX NODE NODE SYS SYS SYS SYS 0-85 0 N/A
GPU2 NV8 NV8 X NV8 NV8 NV8 NV8 NV8 NODE NODE PIX NODE SYS SYS SYS SYS 0-85 0 N/A
GPU3 NV8 NV8 NV8 X NV8 NV8 NV8 NV8 NODE NODE NODE PIX SYS SYS SYS SYS 0-85 0 N/A
GPU4 NV8 NV8 NV8 NV8 X NV8 NV8 NV8 SYS SYS SYS SYS PIX NODE NODE NODE 86-171 1 N/A
GPU5 NV8 NV8 NV8 NV8 NV8 X NV8 NV8 SYS SYS SYS SYS NODE PIX NODE NODE 86-171 1 N/A
GPU6 NV8 NV8 NV8 NV8 NV8 NV8 X NV8 SYS SYS SYS SYS NODE NODE PIX NODE 86-171 1 N/A
GPU7 NV8 NV8 NV8 NV8 NV8 NV8 NV8 X SYS SYS SYS SYS NODE NODE NODE PIX 86-171 1 N/A
NIC0 PIX NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE SYS SYS SYS SYS
NIC1 NODE PIX NODE NODE SYS SYS SYS SYS NODE X NODE NODE SYS SYS SYS SYS
NIC2 NODE NODE PIX NODE SYS SYS SYS SYS NODE NODE X NODE SYS SYS SYS SYS
NIC3 NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE NODE X SYS SYS SYS SYS
NIC4 SYS SYS SYS SYS PIX NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE
NIC5 SYS SYS SYS SYS NODE PIX NODE NODE SYS SYS SYS SYS NODE X NODE NODE
NIC6 SYS SYS SYS SYS NODE NODE PIX NODE SYS SYS SYS SYS NODE NODE X NODE
NIC7 SYS SYS SYS SYS NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE NODE X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_bond_0
NIC1: mlx5_bond_1
NIC2: mlx5_bond_2
NIC3: mlx5_bond_3
NIC4: mlx5_bond_4
NIC5: mlx5_bond_5
NIC6: mlx5_bond_6
NIC7: mlx5_bond_7
Hypervisor vendor: KVM
ulimit soft: 1048576
|
open
|
2025-03-12T09:33:14Z
|
2025-03-13T06:58:36Z
|
https://github.com/sgl-project/sglang/issues/4339
|
[] |
fmantianxing
| 2
|
plotly/plotly.py
|
plotly
| 4,806
|
plotly doesn't render html for data with long descriptions
|
When trying to create a scatter plot with long descriptions (~5k chars), the HTML file stops running. Unfortunately, the descriptions cannot be shortened. Here's a code snippet that will help reproduce the problem:
```python
import numpy as np
import pandas as pd
import plotly.express as px
embedding2d = # generate np.array with shape: (389000, 2)
x = embedding2d[:, 0]
y = embedding2d[:, 1]
data = pd.DataFrame({
    'x': x,
    'y': y,
    'text': # generate list of 389000 samples with 10k chars each sample
})
# Visualization with plotly
fig = px.scatter(
    data, x='x', y='y',
    #template='plotly_dark',
    hover_data=['text'],  # add the data that will be shown on hover
    title="2D Vector Visualization with Labels",
)
fig.update_layout(
    width=1520, height=685,
    title_x=0.5
)
fig.show()
fig.write_html('visualization.html')  # open in browser
```
P.S. Changing the browser doesn't help; I waited about 20 minutes.
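One hedged workaround sketch (not from the issue): cap the hover strings before building the figure, which keeps the exported HTML to a manageable size at the cost of truncated tooltips:

```python
# Illustrative helper (hypothetical, not a plotly API): shorten hover
# strings before they are embedded into the exported HTML.
def truncate_hover(texts, max_chars=200):
    """Return copies of the hover strings capped at max_chars characters."""
    return [t if len(t) <= max_chars else t[:max_chars] + "..." for t in texts]

print(truncate_hover(["short", "x" * 300], max_chars=10))  # ['short', 'xxxxxxxxxx...']
```

With ~389k rows at 10k chars each, the raw hover text alone is several gigabytes of embedded JSON, which plausibly explains the browser hang.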
|
open
|
2024-10-17T12:54:12Z
|
2024-10-22T19:27:29Z
|
https://github.com/plotly/plotly.py/issues/4806
|
[
"bug",
"P3"
] |
JohnConnor123
| 0
|
AirtestProject/Airtest
|
automation
| 478
|
How do I double-press the Home button on iOS 11 or 12?
|
As the title says.
|
closed
|
2019-07-29T01:35:45Z
|
2020-03-30T07:12:52Z
|
https://github.com/AirtestProject/Airtest/issues/478
|
[] |
tong-yao
| 3
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,486
|
A question about async_mode and block
|
I am trying to build a Python backend server that sends messages to the front-end webpage when I receive data from Kafka. I found that when I set async_mode to 'eventlet', the backend could not use socketio.emit to send the message to the webpage; it looks like the communication is blocked. However, when I set async_mode to 'threading', the program works fine.
Here is how I set async_mode:
`socketio = SocketIO(app, async_mode='threading', cors_allowed_origins='*')`
Here is how I use socketio.emit:
```python
consumer = KafkaConsumer("test", bootstrap_servers=["192.168.142.139:9092"])
def background_thread():
count = 0
global consumer
for msg in consumer:
count += 1
socketio.sleep(0)
data_json = msg.value.decode('utf8')
socketio.emit('my_response', {'data': data_json, 'count': count})
```
Therefore, I want to ask why this happens. What is the difference between those two async modes? Would the 'threading' async_mode cause any problems if I deploy the server?
|
closed
|
2021-02-22T15:23:09Z
|
2021-02-23T11:03:55Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1486
|
[
"question"
] |
north-horse
| 2
|
deezer/spleeter
|
deep-learning
| 811
|
[Discussion] How to update Spleeter properly
|
<!-- Please respect the title [Discussion] tag. -->
Hi,
I currently have Spleeter version 2.3.1 installed on my Mac mini M1.
How do I properly update it to the latest version?
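Assuming Spleeter was installed with pip (not conda or Homebrew), the standard upgrade path would be:

```shell
# Upgrade a pip-installed Spleeter to the latest published release
pip install --upgrade spleeter

# Confirm the installed version afterwards
pip show spleeter
```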
Cheers Marc
|
open
|
2022-12-10T09:39:56Z
|
2022-12-13T10:16:30Z
|
https://github.com/deezer/spleeter/issues/811
|
[
"question"
] |
daslicht
| 2
|
ray-project/ray
|
data-science
| 51,333
|
[RLlib] Basic PPO script throws obscure error when building RLModule
|
### What happened + What you expected to happen
Using the repro script below, I get the following error:
> Traceback (most recent call last):
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/core/rl_module/rl_module.py", line 102, in build
> module = self.module_class(
> ^^^^^^^^^^^^^^^^^^
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/algorithms/ppo/torch/default_ppo_torch_rl_module.py", line 24, in __init__
> super().__init__(*args, **kwargs, catalog_class=catalog_class)
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/core/rl_module/torch/torch_rl_module.py", line 50, in __init__
> RLModule.__init__(self, *args, **kwargs)
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/core/rl_module/rl_module.py", line 467, in __init__
> self.setup()
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/algorithms/ppo/default_ppo_rl_module.py", line 31, in setup
> self.catalog.actor_critic_encoder_config.base_encoder_config,
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> AttributeError: 'NoneType' object has no attribute 'actor_critic_encoder_config'
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "/Users/artur/code/footsies/main.py", line 14, in <module>
> algo = config.build()
> ^^^^^^^^^^^^^^
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/utils/deprecation.py", line 128, in _ctor
> return obj(*args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/algorithms/algorithm_config.py", line 5417, in build
> return self.build_algo(*args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/algorithms/algorithm_config.py", line 958, in build_algo
> return algo_class(
> ^^^^^^^^^^^
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/algorithms/algorithm.py", line 528, in __init__
> super().__init__(
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/tune/trainable/trainable.py", line 157, in __init__
> self.setup(copy.deepcopy(self.config))
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/algorithms/algorithm.py", line 631, in setup
> self.env_runner_group = EnvRunnerGroup(
> ^^^^^^^^^^^^^^^
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/env/env_runner_group.py", line 198, in __init__
> self._setup(
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/env/env_runner_group.py", line 293, in _setup
> self._local_env_runner = self._make_worker(
> ^^^^^^^^^^^^^^^^^^
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/env/env_runner_group.py", line 1207, in _make_worker
> return self.env_runner_cls(**kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/env/single_agent_env_runner.py", line 118, in __init__
> self.make_module()
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/env/single_agent_env_runner.py", line 691, in make_module
> self.module = module_spec.build()
> ^^^^^^^^^^^^^^^^^^^
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/core/rl_module/rl_module.py", line 113, in build
> module = self.module_class(module_config)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/algorithms/ppo/torch/default_ppo_torch_rl_module.py", line 24, in __init__
> super().__init__(*args, **kwargs, catalog_class=catalog_class)
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/core/rl_module/torch/torch_rl_module.py", line 50, in __init__
> RLModule.__init__(self, *args, **kwargs)
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/core/rl_module/rl_module.py", line 408, in __init__
> deprecation_warning(
> File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/utils/deprecation.py", line 48, in deprecation_warning
> raise ValueError(msg)
> ValueError: `RLModule(config=[RLModuleConfig])` has been deprecated. Use `RLModule(observation_space=.., action_space=.., inference_only=.., learner_only=.., model_config=..)` instead.
> (SingleAgentEnvRunner pid=66294) A.L.E: Arcade Learning Environment (version 0.10.1+6a7e0ae)
> (SingleAgentEnvRunner pid=66294) [Powered by Stella]
> (SingleAgentEnvRunner pid=66294) 2025-03-13 13:58:13,761 WARNING rl_module.py:430 -- Didn't create a Catalog object for your RLModule! If you are not using the new API stack yet, make sure to switch it off in your config: `config.api_stack(enable_rl_module_and_learner=False, enable_env_runner_and_connector_v2=False)`. All algos use the new stack by default. Ignore this message, if your RLModule does not use a Catalog to build its sub-components.
> (SingleAgentEnvRunner pid=66294) 2025-03-13 13:58:13,761 WARNING deprecation.py:50 -- DeprecationWarning: `RLModule(config=[RLModuleConfig object])` has been deprecated. Use `RLModule(observation_space=.., action_space=.., inference_only=.., model_config=.., catalog_class=..)` instead. This will raise an error in the future!
> (SingleAgentEnvRunner pid=66294) 2025-03-13 13:58:13,761 WARNING deprecation.py:50 -- DeprecationWarning: `get_rl_module_config` has been deprecated. Use `RLModule(*, observation_space=.., action_space=.., ....)` instead. This will raise an error in the future!
> (SingleAgentEnvRunner pid=66294) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::SingleAgentEnvRunner.__init__() (pid=66294, ip=127.0.0.1, actor_id=c3440375d741e12b59d7f85501000000, repr=<ray.rllib.env.single_agent_env_runner.SingleAgentEnvRunner object at 0x38717e6c0>)
> (SingleAgentEnvRunner pid=66294) ^^^^^^^^^^^^^^^^^^
> (SingleAgentEnvRunner pid=66294) File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/core/rl_module/rl_module.py", line 408, in __init__ [repeated 7x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.)
> (SingleAgentEnvRunner pid=66294) super().__init__(*args, **kwargs, catalog_class=catalog_class) [repeated 2x across cluster]
> (SingleAgentEnvRunner pid=66294) RLModule.__init__(self, *args, **kwargs) [repeated 2x across cluster]
> (SingleAgentEnvRunner pid=66294) self.setup()
> (SingleAgentEnvRunner pid=66294) File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/algorithms/ppo/default_ppo_rl_module.py", line 31, in setup
> (SingleAgentEnvRunner pid=66294) self.catalog.actor_critic_encoder_config.base_encoder_config,
> (SingleAgentEnvRunner pid=66294) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> (SingleAgentEnvRunner pid=66294) AttributeError: 'NoneType' object has no attribute 'actor_critic_encoder_config'
> (SingleAgentEnvRunner pid=66294) During handling of the above exception, another exception occurred:
> (SingleAgentEnvRunner pid=66294) ray::SingleAgentEnvRunner.__init__() (pid=66294, ip=127.0.0.1, actor_id=c3440375d741e12b59d7f85501000000, repr=<ray.rllib.env.single_agent_env_runner.SingleAgentEnvRunner object at 0x38717e6c0>)
> (SingleAgentEnvRunner pid=66294) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> (SingleAgentEnvRunner pid=66294) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [repeated 2x across cluster]
> (SingleAgentEnvRunner pid=66294) self.make_module()
> (SingleAgentEnvRunner pid=66294) File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/env/single_agent_env_runner.py", line 691, in make_module
> (SingleAgentEnvRunner pid=66294) self.module = module_spec.build()
> (SingleAgentEnvRunner pid=66294) ^^^^^^^^^^^^^^^^^^^
> (SingleAgentEnvRunner pid=66294) File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/core/rl_module/rl_module.py", line 113, in build
> (SingleAgentEnvRunner pid=66294) module = self.module_class(module_config)
> (SingleAgentEnvRunner pid=66294) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> (SingleAgentEnvRunner pid=66294) deprecation_warning(
> (SingleAgentEnvRunner pid=66294) File "/Users/artur/miniforge3/envs/footsies/lib/python3.12/site-packages/ray/rllib/utils/deprecation.py", line 48, in deprecation_warning
> (SingleAgentEnvRunner pid=66294) raise ValueError(msg)
> (SingleAgentEnvRunner pid=66294) ValueError: `RLModule(config=[RLModuleConfig])` has been deprecated. Use `RLModule(observation_space=.., action_space=.., inference_only=.., learner_only=.., model_config=..)` instead.
>
### Versions / Dependencies
Tested on 2.42.1 and 2.43.0
### Reproduction script
```
import gymnasium as gym
from ray.rllib.algorithms.ppo import PPOConfig
config = PPOConfig()
config.api_stack(
enable_rl_module_and_learner=1,
enable_env_runner_and_connector_v2=1
)
config.environment(env="ale_py:ALE/Pong-v5")
algo = config.build()
algo.train()
```
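One possible workaround (a config-fragment sketch only, taken from the runtime warning near the end of the log above, not a confirmed fix): projects not yet migrated to the new API stack can switch it off explicitly in the config before building:

```python
# Hypothetical workaround quoted from the warning in the log above:
# switch off the new API stack so the old RLModule construction path is used.
config.api_stack(
    enable_rl_module_and_learner=False,
    enable_env_runner_and_connector_v2=False,
)
```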
### Issue Severity
None
|
open
|
2025-03-13T13:02:00Z
|
2025-03-23T21:49:16Z
|
https://github.com/ray-project/ray/issues/51333
|
[
"bug",
"P1",
"rllib"
] |
ArturNiederfahrenhorst
| 7
|
explosion/spaCy
|
data-science
| 13,498
|
Multiple INFIX inside span cannot be recognized
|
A span containing more than one INFIX token will not be recognized.

## Expected behaviour
All 4 date strings should be recognized.
## How to reproduce the behaviour
Was run in a jupyter notebook for displaying purpose
```python
from spacy.tokenizer import Tokenizer
import re
import spacy
records = [
("10/02/2015", {"spans": {"sc": [(0, 10, "DATE")]}}),
("10/02.2015", {"spans": {"sc": [(0, 10, "DATE")]}}),
("10/2015", {"spans": {"sc": [(0, 7, "DATE")]}}),
("10.2015", {"spans": {"sc": [(0, 7, "DATE")]}}),
]
infix_re = re.compile(r"""[/\.]""")
def custom_tokenizer(nlp):
return Tokenizer(nlp.vocab, infix_finditer=infix_re.finditer)
model = spacy.blank("EN")
pipe = model.add_pipe("spancat")
pipe.add_label("DATE")
model.tokenizer = custom_tokenizer(model)
model.initialize()
trainData = [spacy.training.Example.from_dict(model.make_doc(t), a) for t, a in records]
optimizer = model.begin_training()
for i in range(1000):
model.update(trainData, sgd=optimizer, drop=0.2)
for t, a in records:
spacy.displacy.render(model(t), style="span", jupyter=True)
print(model.tokenizer.explain(t))
```
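For reference, here is a minimal stdlib-only sketch (not spaCy's actual tokenizer, just an illustration of what the custom `infix_finditer` pattern produces) showing that each date string is split into several tokens, which is why the span annotation must cover all of them:

```python
import re

infix_re = re.compile(r"[/\.]")

def split_on_infixes(text):
    """Mimic the effect of infix_finditer: split text at every infix match,
    keeping the infix characters as their own tokens."""
    tokens, start = [], 0
    for m in infix_re.finditer(text):
        if m.start() > start:
            tokens.append(text[start:m.start()])
        tokens.append(m.group())
        start = m.end()
    if start < len(text):
        tokens.append(text[start:])
    return tokens

print(split_on_infixes("10/02.2015"))  # ['10', '/', '02', '.', '2015']
print(split_on_infixes("10.2015"))     # ['10', '.', '2015']
```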
## Your Environment
- **spaCy version:** 3.7.4
- **Platform:** Linux-6.5.0-28-generic-x86_64-with-glibc2.35
- **Python version:** 3.10.12
- **Pipelines:** en_core_web_lg (3.7.1), en_core_web_md (3.7.1)
|
closed
|
2024-05-15T13:22:33Z
|
2024-06-16T00:02:40Z
|
https://github.com/explosion/spaCy/issues/13498
|
[] |
nsch0e
| 3
|
plotly/dash-bio
|
dash
| 572
|
igv duplicate sequence tracks in igv is inside tabs
|
python: 3.8
dash: 1.20.0
dash-bio: 0.7.1
**Describe the bug**
When a dashbio Igv component is embedded inside a tab and the user switches between tabs, coming back to the tab containing the Igv triggers a reload of the igv module, along with the addition of a duplicate sequence track each time the tab is switched. This behavior seems to happen only when `tracks` is defined; there is no issue when `tracks` is blank.
This results in the following (note the duplicate sequence tracks in color),

**To Reproduce**
Below is a toy example to reproduce the behavior. I've replaced urls for the vcf with `LINK`, edit this to refer to an actual file to make the below example work. Same if the tracks were bam alignments, etc.
```
import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_bio as dashbio
app = dash.Dash(__name__)
app.title = 'test'
app.config['suppress_callback_exceptions'] = True
app.layout = html.Div([
dcc.Tabs(value='tab1', children=[
dcc.Tab(id='tab1', value='tab1', label='Tab1', children=[]),
dcc.Tab(id='tab2', value='tab2', label='Tab2', children=[]),
dcc.Tab(id='tab3', value='tab3', label='Tab3', children=[
dashbio.Igv(
id='igv',
genome='hg38',
tracks = [
{
'type': "variant",
'format': "vcf",
'url': "LINK",
'indexURL': "LINK",
'name': "VCF",
}
]
),
])
]),
])
app.run_server()
```
|
closed
|
2021-09-07T15:26:13Z
|
2021-11-23T06:48:48Z
|
https://github.com/plotly/dash-bio/issues/572
|
[] |
boyangzhao
| 0
|
HumanSignal/labelImg
|
deep-learning
| 182
|
Display label in bounding box
|
<!--
Please provide as much as detail and example as you can.
You can add screenshots if appropriate.
-->
**Feature idea:**
Display the label inside the bounding box so it is clearer which label corresponds to which bounding box.
|
closed
|
2017-10-24T11:45:43Z
|
2021-03-19T12:18:56Z
|
https://github.com/HumanSignal/labelImg/issues/182
|
[
"enhancement"
] |
jensdenbraber
| 3
|
dynaconf/dynaconf
|
django
| 235
|
global merge going away?
|
I set up our application on dynaconf 1 in such a way that it looks at multiple configuration files and conf files further down in the stack will override those higher up. This relies on the MERGE_ENABLED_FOR_DYNACONF setting to work since most of our configuration is organized in objects. After upgrading to 2.1.1 I see the deprecation notice and a suggestion to move to locally specified override flags. It seems like this new approach is incompatible with the way I've set things up. Is there some kind of workaround I can do with the new system aside from adding `dynaconf_merge=true` to every object in the conf?
|
closed
|
2019-09-18T19:11:00Z
|
2019-10-17T15:54:23Z
|
https://github.com/dynaconf/dynaconf/issues/235
|
[
"enhancement",
"question"
] |
fantapop
| 4
|
ultralytics/yolov5
|
deep-learning
| 12,930
|
Add ghost modules into tf.py for exporting yolov5s-ghost.pt to tensorflow saved_model or tflite
|
### Search before asking
- [ ] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar feature requests.
### Description
To solve https://github.com/ultralytics/yolov5/issues/9489#issue-1377645244, add TFGhostConv, TFC3Ghost, and TFGhostBottleneck to tf.py so that yolov5s-ghost.pt can be exported to .tflite successfully.
### Use case
# Train a model with models/hub/yolov5s-ghost.yaml
`!python train.py --weights '' --cfg models/hub/yolov5s-ghost.yaml`
# Add classes in tf.py
```
class TFGhostConv(keras.layers.Layer):
# Ghost Convolution https://github.com/huawei-noah/ghostnet
def __init__(self, c1, c2, k=1, s=1, g=1, act=True,w=None):
"""Initializes GhostConv with in/out channels, kernel size, stride, groups, and activation; halves out channels
for efficiency.
"""
super().__init__()
c_ = c2 // 2 # hidden channels
self.cv1 = TFConv(c1, c_, k, s, None, g, act=act,w=w.cv1)
self.cv2 = TFDWConv(c_, c_, 5, 1, None, act=act,w=w.cv2)
def call(self, inputs):
"""Performs forward pass, concatenating outputs of two convolutions on input `x`: shape (B,C,H,W)."""
y = self.cv1(inputs)
return tf.concat((y, self.cv2(y)), axis=3)
```
```
class TFGhostBottleneck(keras.layers.Layer):
# Ghost Bottleneck https://github.com/huawei-noah/ghostnet
#w has w.conv and w.shortcut
def __init__(self, c1, c2, k=3, s=1,w=None):
"""Initializes GhostBottleneck with ch_in `c1`, ch_out `c2`, kernel size `k`, stride `s`; see https://github.com/huawei-noah/ghostnet."""
super().__init__()
w1 = w.conv
sc1 = w.shortcut
wGConv = w1[0]
wDWConv = w1[1]
wGConv2 = w1[2]
c_ = c2 // 2
self.conv = keras.Sequential(
[TFGhostConv(c1, c_, 1, 1,w=wGConv), # pw
TFDWConv(c_,c_,k,s,act=False) if s == 2 else keras.layers.Lambda(tf.identity), # dw
TFGhostConv(c_, c2, 1, 1, act=False,w=wGConv2),
]) # pw-linear
self.shortcut = (
keras.Sequential(TFDWConv(c1, c1, k, s, act=False,w=sc1), TFConv(c1, c2, 1, 1, act=False,w=sc1)) if s == 2 else tf.identity
)
def call(self, inputs):
"""Processes input through conv and shortcut layers, returning their summed output."""
return self.conv(inputs) + self.shortcut(inputs)
```
```
class TFC3Ghost(keras.layers.Layer):
def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
"""
Initializes CSP Bottleneck with 3 convolutions, supporting optional shortcuts and group convolutions.
Inputs are ch_in, ch_out, number, shortcut, groups, expansion.
"""
super().__init__()
c_ = int(c2 * e) # hidden channels
self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
self.cv2 = TFConv(c1, c_, 1, 1, w=w.cv2)
self.cv3 = TFConv(2 * c_, c2, 1, 1, w=w.cv3)
self.m = keras.Sequential(
[TFGhostBottleneck(c_, c_,w=w.m[j]) for j in range(n)]
)
def call(self, inputs):
"""
Processes input through a sequence of transformations for object detection (YOLOv5).
See https://github.com/ultralytics/yolov5.
"""
return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3))
```
## Make sure these are imported and used in the parse_model() function
```
from models.common import (
C3,
...
C3Ghost,
GhostConv,
GhostBottleneck,
...
SPP,
...
)
```
```
if m in [
...
C3,
...
C3Ghost,
GhostConv,
GhostBottleneck,
...
C3x,
...
]:
```
```
if m in [BottleneckCSP, C3, C3x,C3Ghost]:
args.insert(2, n)
n = 1
```
# export
`!python export.py --weights "your_yolov5s-ghost.pt " --data your_data.yaml --include tflite `
# My result


### Additional
# It works correctly, and I'm willing to submit a PR! However, in my TFGhostBottleneck class, I cannot be absolutely sure if and how the "w" argument should be passed in
## at the positions marked "<------" below
```
class TFGhostBottleneck(keras.layers.Layer):
# Ghost Bottleneck https://github.com/huawei-noah/ghostnet
# w has w.conv and w.shortcut
def __init__(self, c1, c2, k=3, s=1,w=None):
"""Initializes GhostBottleneck with ch_in `c1`, ch_out `c2`, kernel size `k`, stride `s`; see https://github.com/huawei-noah/ghostnet."""
super().__init__()
w1 = w.conv
sc1 = w.shortcut
wGConv = w1[0]
wDWConv = w1[1]
wGConv2 = w1[2]
c_ = c2 // 2
self.conv = keras.Sequential(
[TFGhostConv(c1, c_, 1, 1,w=wGConv), # pw
TFDWConv(c_,c_,k,s,act=False) if s == 2 else keras.layers.Lambda(tf.identity), # dw <-------
TFGhostConv(c_, c2, 1, 1, act=False,w=wGConv2),
]) # pw-linear
self.shortcut = ( <---------
keras.Sequential(TFDWConv(c1, c1, k, s, act=False,w=sc1), TFConv(c1, c2, 1, 1, act=False,w=sc1)) if s == 2 else tf.identity
)
```
# Does anyone have an idea?
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR!
|
closed
|
2024-04-17T04:08:27Z
|
2024-10-20T19:44:03Z
|
https://github.com/ultralytics/yolov5/issues/12930
|
[
"enhancement"
] |
lexxxx-cmd
| 2
|
sherlock-project/sherlock
|
python
| 1,865
|
Please, I need cloud shell
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure complete everything in the checklist.
-->
- [ ] I'm requesting support for a new site
- [ ] I've checked for similar site support requests including closed ones
- [ ] I've checked that the site I am requesting has not been removed in the past and is not documented in [removed_sites.md](https://github.com/sherlock-project/sherlock/blob/master/removed_sites.md)
- [ ] The site I am requesting support for is not a pornographic website
- [ ] I'm only requesting support of **one** website (create a separate issue for each site)
## Description
<!--
Provide the url to the website and the name of the website.
If there is anything else you want to mention regarding the site support request include that in this section.
-->
URL:
|
closed
|
2023-08-15T21:34:20Z
|
2023-08-29T12:04:29Z
|
https://github.com/sherlock-project/sherlock/issues/1865
|
[] |
Memzthee
| 1
|
dmlc/gluon-nlp
|
numpy
| 1,410
|
Print link to AWS Batch training logs in the Github Actions "Test Project on AWS Batch" step
|
## Description
Currently the log will be available as build artifact at the end of the run. It would be good to provide a link to the AWS Batch log so that team members can easily access the log during the runtime of the test.
cc @barry-jin
|
open
|
2020-10-29T04:03:37Z
|
2020-10-29T17:20:29Z
|
https://github.com/dmlc/gluon-nlp/issues/1410
|
[
"enhancement",
"CI"
] |
leezu
| 3
|
OpenBB-finance/OpenBB
|
python
| 6,734
|
[Bug]
|
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps(from the start) and commands to reproduce the behavior
**Screenshots**
If applicable, add screenshots to help explain your problem.
If you are running the terminal using the conda version please
rerun the terminal with `python terminal.py --debug`, and then
recreate your issue. Then include a screenshot of the entire
error printout.
**Desktop (please complete the following information):**
- OS: [e.g. Mac Sierra]
- Python version [e.g. 3.6.8]
**Additional context**
Add any other information that you think could be useful for us.
|
closed
|
2024-10-03T15:44:40Z
|
2024-10-03T18:00:42Z
|
https://github.com/OpenBB-finance/OpenBB/issues/6734
|
[] |
Mirauzo
| 1
|
dnouri/nolearn
|
scikit-learn
| 205
|
Spatial transformer (layer that takes another layer that is not "incoming")
|
Hi,
I am trying to train a network with a spatial transformer and the class needs to be passed the layer that computes the transform params, e.g the parameter "localization_network":
http://lasagne.readthedocs.org/en/latest/modules/layers/special.html
Is this possible with nolearn?
I'm aware of how to make a layer in nolearn take multiple input layers, but I'm not sure how to go about passing a layer that isn't under the parameter "incoming". Would I have to modify some of the code in base.py myself?
Cheers
|
closed
|
2016-01-26T21:26:23Z
|
2016-01-26T23:40:39Z
|
https://github.com/dnouri/nolearn/issues/205
|
[] |
christopher-beckham
| 1
|
pallets-eco/flask-sqlalchemy
|
sqlalchemy
| 634
|
About unique=True and unique=True, index=True
|
When I create tables like this:
```
class Te(Model):
__tablename__ = 'tt'
id = Column(db.Integer(), primary_key=True)
t1 = Column(db.String(80), unique=True, )
t3 = Column(db.String(80), unique=True, index=True, )
```
and In my Sequel Pro , I get the table create info:
```
CREATE TABLE `tt` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`t1` varchar(80) DEFAULT NULL,
`t3` varchar(80) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `t1` (`t1`),
UNIQUE KEY `ix_tt_t3` (`t3`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```
Does this mean t1 is entirely the same as t3 in MySQL? So when unique=True is defined, is it not required to also define index=True?
Thanks.
|
closed
|
2018-08-31T06:08:26Z
|
2018-08-31T12:33:22Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/634
|
[] |
hjlarry
| 1
|
keras-team/keras
|
deep-learning
| 20,701
|
Applying class_weight for Models with multiple "output"
|
I am trying to train a model using Keras, and I have to apply class_weight to set the data ratio correctly. However, I have 3 types of labels, and when I try to train the model it gives the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File f:\code\small\class_weight.py:2
1 #%% Modeli Eğit
----> 2 history = model.fit(
3 train_dataset,
4 epochs=20,
5 validation_data=val_dataset,
6 class_weight=class_weights
7 )
File ~\AppData\Roaming\Python\Python311\site-packages\keras\src\utils\traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # `keras.config.disable_traceback_filtering()`
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
File ~\AppData\Roaming\Python\Python311\site-packages\keras\src\trainers\data_adapters\tf_dataset_adapter.py:128, in make_class_weight_map_fn.<locals>.class_weights_map_fn(*data)
122 if tree.is_nested(y):
123 raise ValueError(
124 "`class_weight` is only supported for Models with a single "
125 "output."
126 )
--> 128 if y.shape.rank >= 2:
129 y_classes = tf.__internal__.smart_cond.smart_cond(
130 tf.shape(y)[-1] > 1,
131 lambda: tf.argmax(y, axis=-1),
132 lambda: tf.cast(tf.round(tf.squeeze(y, axis=-1)), tf.int32),
133 )
134 else:
135 # Special casing for rank 1, where we can guarantee sparse encoding.
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
```
here are some essential parts from my code:
```
#%% CLass Weight Calculation
total_samples = sum(class_counts.values())
current_ratios = {cls: count / total_samples for cls, count in class_counts.items()}
target_ratios = {0: 0.45, 1: 0.35, 2: 0.2}
class_weights = {cls: target_ratios[cls] / current_ratios[cls] for cls in class_counts}
print("Class Weights:", class_weights)
```
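One common workaround (a sketch under the assumption that the dataset yields integer class labels; the values below are examples, not taken from the issue) is to convert the class-weight dict into per-sample weights and pass those via `sample_weight` instead of `class_weight`:

```python
# Hypothetical workaround: derive per-sample weights from a class-weight dict.
# `labels` stands in for the integer labels your dataset yields.
class_weights = {0: 1.0, 1: 1.29, 2: 2.25}  # example values
labels = [0, 1, 2, 1, 0]

sample_weights = [class_weights[y] for y in labels]
print(sample_weights)  # [1.0, 1.29, 2.25, 1.29, 1.0]

# With tf.data one would map this over the dataset, e.g.
# dataset = dataset.map(lambda x, y: (x, y, tf.gather(weight_tensor, y)))
```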
```
num_classes = 3
n_features = 24
model = Sequential()
model.add(LSTM(128, input_shape=(time_steps, n_features), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(64, return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) #categorical_crossentropy -> one hot encoding
model.summary()
```
What should I do? I have researched the web but I couldn't find a way to apply class_weight properly for a model with multiple outputs.
|
closed
|
2024-12-29T20:31:54Z
|
2025-01-31T02:00:15Z
|
https://github.com/keras-team/keras/issues/20701
|
[
"type:support",
"stat:awaiting response from contributor",
"stale"
] |
wosimidwa
| 4
|
Kanaries/pygwalker
|
matplotlib
| 55
|
HTML not rendered on Databricks runtime 9.1
|
Hello, thanks for creating and maintaining this package. Sadly, when I try to render the HTML I just get `<IPython.core.display.HTML object>` as output.
I have tried with:
- `! pip install git+https://github.com/Kanaries/pygwalker@main`
- `!pip install 'pygwalker>=0.1.4a0'`
- `!pip install pygwalker`
All cases showed the same result. Any suggestions?
Thanks
|
closed
|
2023-02-28T18:59:44Z
|
2023-07-06T15:10:38Z
|
https://github.com/Kanaries/pygwalker/issues/55
|
[] |
ig-perez
| 4
|
nteract/papermill
|
jupyter
| 637
|
`find_first_tagged_cell_index` expects `metadata` to have `tags` attribute
|
## 🐛 Bug
Per title:
```py
def find_first_tagged_cell_index(nb, tag):
"""Find the first tagged cell ``tag`` in the notebook.
Parameters
----------
nb : nbformat.NotebookNode
The notebook to introspect
tag : str
The tag to look for
Returns
-------
nbformat.NotebookNode
Whether the notebook contains a cell tagged ``tag``?
"""
parameters_indices = []
for idx, cell in enumerate(nb.cells):
if tag in cell.metadata.tags:
parameters_indices.append(idx)
if not parameters_indices:
return -1
return parameters_indices[0]
```
My notebooks do not have a `tags` attribute unless I explicitly add one, which means `papermill.parameterize.parameterize_notebook` fails with `tags` error.
I don't know why my notebooks seem to have empty `metadata` (I'm using JupyterLab), but I guess I can iterate over a notebook and add empty `tags`.
The [documentation](https://nbformat.readthedocs.io/en/latest/format_description.html#cell-metadata) isn't explicit that any of the values in metadata are required to be present 🤷♂️ .
For my situation it would be great if this was changed to something like:
```py
if "tags" in cell.metadata and tag in cell.metadata.tags:
```
Happy to submit a PR, in the meantime I'll go add some code to iterate over my cells adding empty tags 👍
|
closed
|
2021-10-21T09:11:38Z
|
2021-10-21T09:58:04Z
|
https://github.com/nteract/papermill/issues/637
|
[
"bug",
"help wanted"
] |
tfmark
| 1
|
TheKevJames/coveralls-python
|
pytest
| 242
|
Support for Python 2.7
|
Even though Python 2.7 is out of service, there are many projects that still need to support it.
Taking that away in an important project like coveralls is not a good idea, particularly since it does not solve any problem within coveralls.
I have a PR https://github.com/andy-maier/coveralls-python/pull/1 on top of the master branch of this repo that brings back support for Python 2.7, and its GitHub Actions test cases succeed. The changes were really minimal. We have successfully used that branch on Python 2.7 in a GitHub Actions based project (see https://github.com/pywbem/nocasedict/pull/68).
Please let me know whether you are willing to reintroduce support for Python 2.7 into this project. If that is not the case, no problem, then I will consider releasing a 2.7 backport version of coveralls to Pypi. I'm sorry but we simply need that support.
PS: As an alternative, I looked at the old `python-coveralls` as an alternative, but besides not being maintained anymore as it seems, it is behind `coveralls-python` in its support for CI systems, and does not combine reports with coveralls-python when running on GitHub Actions. So it is not a viable alternative for us.
|
closed
|
2020-11-16T14:40:23Z
|
2024-04-26T14:55:48Z
|
https://github.com/TheKevJames/coveralls-python/issues/242
|
[] |
andy-maier
| 6
|
thtrieu/darkflow
|
tensorflow
| 1,007
|
json file for my trained video data
|
I have succeeded in making .json files for images,
but I'm having trouble with video files.
Here is my code
`python flow --model cfg/tiny-yolo-test.cfg --load 34000 --demo videoSample/0.avi --saveVideo --json`
I don't know how to get a .json file for my output video.
|
open
|
2019-03-22T09:04:07Z
|
2019-03-22T09:04:07Z
|
https://github.com/thtrieu/darkflow/issues/1007
|
[] |
richard0326
| 0
|
vimalloc/flask-jwt-extended
|
flask
| 419
|
Need way to determine in which allowed location (cookie, header, etc) the JWT was found for the current request
|
For testing and other reasons, I would like to have protected routes that can be authorized by a JWT either in a cookie (for browser-based access) or in a header (for single-page apps or standalone clients). This works fine, except that the recommended approach for implicit access token refresh:
```python
# Using an `after_request` callback, we refresh any token that is within 30
# minutes of expiring. Change the timedeltas to match the needs of your application.
@app.after_request
def refresh_expiring_jwts(response):
try:
exp_timestamp = get_jwt()["exp"]
now = datetime.now(timezone.utc)
target_timestamp = datetime.timestamp(now + timedelta(minutes=30))
if target_timestamp > exp_timestamp:
access_token = create_access_token(identity=get_jwt_identity())
set_access_cookies(response, access_token)
return response
except (RuntimeError, KeyError):
# Case where there is not a valid JWT. Just return the original respone
return response
```
should be suppressed in the case where the access token jwt was supplied in a header rather than a cookie.
Is there a viable way to make this work in the current implementation? If not, does this sound like a reasonable feature request? I'd be willing to take a stab at a PR.
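As an illustration only (plain dicts standing in for Flask's request headers and cookies; the function and default names below are assumptions, not part of flask-jwt-extended's API), the location check the request describes could look like:

```python
def jwt_location(headers, cookies,
                 header_name="Authorization",
                 cookie_name="access_token_cookie"):
    """Report where a JWT was supplied for this request, headers first."""
    if header_name in headers:
        return "headers"
    if cookie_name in cookies:
        return "cookies"
    return None

print(jwt_location({"Authorization": "Bearer x"}, {}))  # headers
print(jwt_location({}, {"access_token_cookie": "x"}))   # cookies
```

The `after_request` refresher above could then skip `set_access_cookies` whenever this returns `"headers"`.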
|
closed
|
2021-04-29T19:11:56Z
|
2021-05-02T17:50:48Z
|
https://github.com/vimalloc/flask-jwt-extended/issues/419
|
[] |
sammck
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 1,133
|
How to train in other languages (unicode)
|
How can I train other languages, such as languages written in Unicode scripts?
For example:
قانداق ئەھۋالىڭىز ياخسىمۇ
ئالىم بولساڭ ئالەم سىنىڭكى
These lines are written in a Unicode script.
|
open
|
2022-11-06T15:24:19Z
|
2022-12-05T10:10:51Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1133
|
[] |
xjsdn
| 2
|
pytest-dev/pytest-mock
|
pytest
| 386
|
Not sure why this is not working
|


I am trying to mock the response of a function in the httprequest_gecko module, but it doesn't seem to be working. Please let me know if you have any suggestions. Thank you!
I am using the latest versions of pytest (7.1.2) and pytest-mock (3.11.1) on Python 3.9.2.
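Since only screenshots were posted, the usual culprit in cases like this is patching the function in the module that *defines* it rather than the module that *uses* it. A self-contained sketch (with hypothetical module names built in memory, using the stdlib `unittest.mock` that pytest-mock wraps):

```python
import sys
import types
from unittest.mock import patch

# Two tiny in-memory modules standing in for the real ones (names are made up).
gecko = types.ModuleType("httprequest_gecko")
gecko.get_data = lambda: "real"
sys.modules["httprequest_gecko"] = gecko

consumer = types.ModuleType("consumer")
consumer.get_data = gecko.get_data  # mimics `from httprequest_gecko import get_data`
sys.modules["consumer"] = consumer

def use_data():
    return sys.modules["consumer"].get_data()

# Patching the defining module does NOT touch the name already bound in `consumer`.
with patch("httprequest_gecko.get_data", return_value="mocked"):
    result_source = use_data()   # still "real"

# Patching where the name is looked up takes effect.
with patch("consumer.get_data", return_value="mocked"):
    result_usage = use_data()    # "mocked"
```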
|
closed
|
2023-10-06T00:40:11Z
|
2023-10-06T15:50:55Z
|
https://github.com/pytest-dev/pytest-mock/issues/386
|
[] |
jiz17043
| 3
|
Avaiga/taipy
|
data-visualization
| 2,219
|
[🐛 BUG] Can't delete scenario using scenario visual element
|
### What went wrong? 🤔
This button is cool:

But it does not delete the scenario
Use this code to reproduce:
```python
from taipy.gui import Gui
import taipy.gui.builder as tgb
from taipy import Config
import taipy as tp
def update(input):
return None
inputs_cfg = Config.configure_data_node("inputs")
outputs_cfg = Config.configure_data_node("outputs")
taska_cfg = Config.configure_task(
"taska", function=update, input=[inputs_cfg], output=[outputs_cfg]
)
scenario_cfg = Config.configure_scenario(id="scenario", task_configs=[taska_cfg])
tp.Orchestrator().run()
scenario = tp.create_scenario(scenario_cfg)
with tgb.Page() as page:
with tgb.Page() as page:
tgb.text("# *Inventory Management* - Forecast **Scenarios**", mode="md")
tgb.html("hr")
with tgb.layout("20 80", columns__mobile="1"):
with tgb.part("sidebar"):
tgb.text("**Create** and select scenarios", mode="md")
tgb.scenario_selector("{scenario}")
with tgb.part("main"):
tgb.html("br")
tgb.scenario("{scenario}")
if __name__ == "__main__":
Gui(page=page).run(title="Dynamic chart")
```
### Version of Taipy
4.0.1.dev1
### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional)
|
closed
|
2024-11-07T09:11:37Z
|
2024-11-09T00:10:47Z
|
https://github.com/Avaiga/taipy/issues/2219
|
[
"🟥 Priority: Critical",
"🖰 GUI",
"💥Malfunction",
"Core: 🎬 Scenario & Cycle"
] |
AlexandreSajus
| 1
|
comfyanonymous/ComfyUI
|
pytorch
| 6,986
|
Can ComfyUI be deployed on multiple GPUs?
|
### Your question
Can ComfyUI be deployed on multiple GPUs? Would that improve workflow processing time?
### Logs
```powershell
```
### Other
_No response_
|
closed
|
2025-02-27T02:36:07Z
|
2025-02-28T16:54:53Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6986
|
[
"User Support"
] |
asenasen123
| 2
|
gevent/gevent
|
asyncio
| 1,484
|
Threadpool broken when using gevent.monkey on command-line
|
* gevent version: 1.5a2
If you do `python -m gevent.monkey something-that-uses-threadpool ...`, where `something-that-uses-threadpool` uses `gevent.get_hub().threadpool` you'll find that no native threads are actually spawned; instead, greenlets are spawned. There are similar issues in comments in gevent/monkey.py.
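A quick way to check the symptom is to compare thread idents: work submitted to a real native-thread pool runs on a different thread than the caller, while the buggy monkey-module path runs it in a greenlet on the main thread. The sketch below uses the stdlib `concurrent.futures` pool as a stand-in for `gevent.get_hub().threadpool`; the helper name is illustrative.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def runs_in_other_thread(pool) -> bool:
    # Compare the caller's thread ident with the ident observed inside
    # the pool; a genuine native-thread pool reports a different ident.
    main_ident = threading.get_ident()
    worker_ident = pool.submit(threading.get_ident).result()
    return worker_ident != main_ident

pool = ThreadPoolExecutor(max_workers=1)
print(runs_in_other_thread(pool))  # a real native-thread pool prints True
pool.shutdown()
```

With the bug described above, the equivalent check against gevent's threadpool would return False, since the "worker" is just a greenlet on the main thread.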
|
closed
|
2019-11-21T15:10:32Z
|
2019-12-20T16:37:43Z
|
https://github.com/gevent/gevent/issues/1484
|
[] |
jamadden
| 0
|
pallets-eco/flask-sqlalchemy
|
flask
| 363
|
Use pytest, test dependency versions
|
Flask converted, so we should too. They also test at the minimal, current, and development versions of their dependencies. I don't think we need to go to development, but we definitely need to test at SQLAlchemy==0.8.
- [x] convert to pytest
- [x] split tests into multiple files
- [x] add tests at minimal dependencies to tox
- [x] adjust travis to use new test targets
|
closed
|
2015-12-11T18:58:48Z
|
2020-12-05T20:55:42Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/363
|
[
"tests"
] |
davidism
| 4
|
holoviz/panel
|
jupyter
| 7,018
|
Vega Pane raises exception when no object but sizing_mode set.
|
The [kmeans pyscript example](https://pyscript.com/@examples/kmeans-in-panel/latest) with panel==1.5.0b2 will no longer work. Instead it will raise an exception: `TypeError: argument of type 'NoneType' is not iterable`.
The problem is that in their example `data` below is `None`.

## Minimum, reproducible example
```python
import panel as pn
pn.extension('vega', sizing_mode="stretch_width")
vgl_pane = pn.pane.Vega().servable()
```
```bash
TypeError: argument of type 'NoneType' is not iterable
Traceback (most recent call last):
File "/home/jovyan/repos/private/panel/panel/io/handlers.py", line 389, in run
exec(self._code, module.__dict__)
File "/home/jovyan/repos/private/panel/script.py", line 5, in <module>
vgl_pane = pn.pane.Vega().servable()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/panel/viewable.py", line 399, in servable
self.server_doc(title=title, location=location) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/panel/viewable.py", line 1006, in server_doc
model = self.get_root(doc)
^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/panel/pane/base.py", line 514, in get_root
root_view, root = self._get_root_model(doc, comm, preprocess)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/panel/pane/base.py", line 452, in _get_root_model
root = self._get_model(doc, comm=comm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/panel/pane/vega.py", line 295, in _get_model
model = super()._get_model(doc, root, parent, comm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/panel/pane/base.py", line 540, in _get_model
model = self._bokeh_model(**self._get_properties(doc))
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/panel/pane/vega.py", line 276, in _get_properties
elif 'width' in self.sizing_mode and 'width' in data:
^^^^^^^^^^^^^^^
TypeError: argument of type 'NoneType' is not iterable
```
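The shape of the fix is a guard before the membership test, so a Vega pane with no spec (`data is None`) falls back to defaults instead of raising. This is a hypothetical helper illustrating the check, not Panel's actual code:

```python
# Guard the membership test: if there is no Vega spec yet, nothing can
# be inspected for an explicit width, so report False instead of raising.
def needs_width(sizing_mode, data):
    if data is None:
        return False
    return "width" in sizing_mode and "width" in data

print(needs_width("stretch_width", None))          # False, no exception
print(needs_width("stretch_width", {"width": 400}))  # True
```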
|
closed
|
2024-07-26T12:38:49Z
|
2024-07-27T09:06:52Z
|
https://github.com/holoviz/panel/issues/7018
|
[] |
MarcSkovMadsen
| 0
|
WZMIAOMIAO/deep-learning-for-image-processing
|
pytorch
| 348
|
Requesting your permission
|
I am very sorry to bother you; since I don't know your contact information, this is the only way I can ask for your consent. I used your SSD and Faster R-CNN code for the experiments in my paper, and I will publish my code and experimental data, with a link back to your code. Thank you very much for your code and video explanations — they helped me a lot. I hope you will agree (I have also messaged you on Bilibili). If you agree, please remember to reply. Thanks again.
|
closed
|
2021-09-06T03:59:31Z
|
2021-09-06T07:51:05Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/348
|
[] |
Saya520r
| 14
|
mage-ai/mage-ai
|
data-science
| 5,537
|
[BUG]
|
### Mage version
test
### Describe the bug
test
### To reproduce
test
### Expected behavior
test
### Screenshots
test
### Operating system
test
### Additional context
test
|
closed
|
2024-11-04T21:52:15Z
|
2024-11-04T21:52:21Z
|
https://github.com/mage-ai/mage-ai/issues/5537
|
[
"bug"
] |
mager-pro
| 0
|
hyperspy/hyperspy
|
data-visualization
| 3,486
|
Newbie (no clue), micro-xrf data
|
Hello,
I have some .bcf files from Bruker Tornado micro-xrf. Is it possible to know whether this file contains actual spectra for each point analyzed, or only element maps?
Thank you!
Kyle Beucke
|
closed
|
2025-02-05T21:27:45Z
|
2025-02-05T21:31:07Z
|
https://github.com/hyperspy/hyperspy/issues/3486
|
[] |
kbeucke
| 0
|
streamlit/streamlit
|
streamlit
| 10,821
|
Allow changing `st.dataframe` row height from the UI
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Streamlit 1.43 added support for programmatically setting the `row_height` in `st.dataframe` & `st.data_editor`. Also allow the user to change the row height interactively via the data grid UI.
### How?
Provide a toolbar action that allows the configuration of the row height from UI. This could just be a few options like `short`, `medium`, `tall`, `extra tall`. For example, this is what Airtable provides in a menu:
<img width="148" alt="Image" src="https://github.com/user-attachments/assets/fc40099b-c722-41d4-afe2-a132f8fa01e6" />
This feature does not require any changes to the API.
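The menu options could map onto the existing programmatic `row_height` pixel values; the sketch below shows one such mapping. The pixel numbers are assumptions for illustration, not values from Streamlit:

```python
# Hypothetical mapping from the proposed toolbar options to pixel row
# heights, which would then be passed through as row_height.
ROW_HEIGHTS = {"short": 25, "medium": 35, "tall": 55, "extra tall": 80}

def resolve_row_height(choice: str, default: int = 35) -> int:
    # Unknown or unset choices fall back to the current default height.
    return ROW_HEIGHTS.get(choice, default)

print(resolve_row_height("tall"))     # 55
print(resolve_row_height("unknown"))  # 35
```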
### Additional Context
_No response_
|
open
|
2025-03-18T11:15:58Z
|
2025-03-18T11:16:57Z
|
https://github.com/streamlit/streamlit/issues/10821
|
[
"type:enhancement",
"feature:st.dataframe",
"feature:st.data_editor"
] |
lukasmasuch
| 1
|
activeloopai/deeplake
|
tensorflow
| 2,965
|
[BUG] `jwt` missing from dependencies in latest releases
|
### Severity
P0 - Critical breaking issue or missing functionality
### Current Behavior
`pip install deeplake` in a fresh venv does not install all dependencies
### Steps to Reproduce
```
python -m venv venv
source venv/bin/activate
pip install deeplake
python -c "from deeplake.core.vectorstore.deeplake_vectorstore import VectorStore"
```
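A stdlib-only way to surface this class of problem early is to probe for importable modules before the failing import. The helper below is illustrative, not part of deeplake:

```python
import importlib.util

def missing_deps(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# "json" ships with Python; the second name stands in for a missing
# dependency like "jwt" (PyPI package PyJWT).
print(missing_deps(["json", "definitely_not_a_real_module"]))
# -> ['definitely_not_a_real_module']
```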
### Expected/Desired Behavior
Installing deeplake should install required dependencies for imports
### Python Version
Python 3.11
### OS
OSX
### IDE
_No response_
### Packages
deeplake latest
### Additional Context
_No response_
### Possible Solution
_No response_
### Are you willing to submit a PR?
- [ ] I'm willing to submit a PR (Thank you!)
|
closed
|
2024-09-26T19:35:32Z
|
2024-09-27T16:46:34Z
|
https://github.com/activeloopai/deeplake/issues/2965
|
[
"bug"
] |
logan-markewich
| 2
|
Esri/arcgis-python-api
|
jupyter
| 1,307
|
GeoAccessor from csv or excel fails to read geometry_column
|
**We can save a GeoDF to CSV or Excel, but the import fails**
If we can save our df to CSV or Excel, we should be able to import it back properly.

This can easily be fixed in the API

|
closed
|
2022-07-20T15:35:52Z
|
2022-10-25T15:30:33Z
|
https://github.com/Esri/arcgis-python-api/issues/1307
|
[
"enhancement"
] |
hildermesmedeiros
| 3
|
deepset-ai/haystack
|
pytorch
| 8,059
|
Infinite loop when optionally branching with a join component
|
**Describe the bug**
Two issues occurred while trying to use a classification pipeline in dC.
1. The pipeline silently went into an infinite loop -> I think this should never be possible
2. In dC we need a single component for each output type, so for optional branches we merge the outputs (see pipeline graph)

However, this leads the pipeline into an infinite loop if a join component is present in the branch that is not executed.
The point where it starts another loop is [here](https://github.com/deepset-ai/haystack/blob/0c9dc008f05c11b3741a44c58c041773ff94bf50/haystack/core/pipeline/base.py#L1093). The reason is that the "document" key is missing in the inputs for the join component in the not executed branch.
The other way around it works, though, as there is only one retriever present.
**Error message**
infinite loop
**To Reproduce**
```python
from haystack.components.routers.transformers_text_router import TransformersTextRouter
from haystack.components.converters import TextFileToDocument
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever, InMemoryBM25Retriever
from haystack.components.writers import DocumentWriter
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.rankers.transformers_similarity import TransformersSimilarityRanker
from haystack.components.joiners.document_joiner import DocumentJoiner
from haystack.core.pipeline import Pipeline
import os
from haystack.components.embedders import SentenceTransformersTextEmbedder, SentenceTransformersDocumentEmbedder
import logging
logging.basicConfig()
logging.getLogger("haystack.core.pipeline.pipeline").setLevel(logging.DEBUG)
doc_store = InMemoryDocumentStore()
path = "../data/test_data/"
pathlist = [path + x for x in os.listdir(path)]
converter = TextFileToDocument()
print(f"Documents: {doc_store.count_documents()}")
router = TransformersTextRouter(model="JasperLS/deberta-v3-base-injection", labels= ["LEGIT", "INJECTION"])
ranker = TransformersSimilarityRanker(
    model="svalabs/cross-electra-ms-marco-german-uncased",
    top_k=20,
    score_threshold=0.6,
    model_kwargs={"torch_dtype": "torch.float16"},
)
joiner = DocumentJoiner(join_mode="merge")
joiner2 = DocumentJoiner(join_mode="merge")
ranker = TransformersSimilarityRanker(top_k=5)
retriever = InMemoryEmbeddingRetriever(document_store=doc_store)
empty_retriever = InMemoryBM25Retriever(document_store=doc_store)
bm25_retriever = InMemoryBM25Retriever(document_store=doc_store)
embedder = SentenceTransformersTextEmbedder(model="sentence-transformers/all-MiniLM-L6-v2")
doc_embedder = SentenceTransformersDocumentEmbedder(model="sentence-transformers/all-MiniLM-L6-v2")
splitter = DocumentSplitter(split_by="word", split_length=200, split_overlap=10)
writer = DocumentWriter(document_store=doc_store)
indexing_p = Pipeline()
indexing_p.add_component(name="converter", instance=converter)
indexing_p.add_component(name="splitter", instance=splitter)
indexing_p.add_component(name="DocEmbedder", instance=doc_embedder)
indexing_p.add_component(name="writer", instance=writer)
indexing_p.connect("converter", "splitter.documents")
indexing_p.connect("splitter.documents", "DocEmbedder.documents")
indexing_p.connect("DocEmbedder.documents", "writer.documents")
indexing_p.run({"converter": {"sources": pathlist}})
print(f"Documents: {doc_store.count_documents()}")
pipeline = Pipeline()
pipeline.add_component(name="TextEmbedder", instance=embedder)
pipeline.add_component(name="router", instance=router)
pipeline.add_component(name="retriever", instance=retriever)
pipeline.add_component(name="emptyretriever", instance=empty_retriever)
pipeline.add_component(name="joinerfinal", instance=joiner)
pipeline.add_component(name="joinerhybrid", instance=joiner2)
pipeline.add_component(name="ranker", instance=ranker)
pipeline.add_component(name="bm25retriever", instance=bm25_retriever)
pipeline.connect("router.INJECTION", "emptyretriever.query")
pipeline.connect("router.LEGIT", "TextEmbedder.text")
pipeline.connect("TextEmbedder", "retriever.query_embedding")
pipeline.connect("router.LEGIT", "ranker.query")
pipeline.connect("router.LEGIT", "bm25retriever.query")
pipeline.connect("bm25retriever", "joinerhybrid.documents")
pipeline.connect("retriever", "joinerhybrid.documents")
pipeline.connect("joinerhybrid.documents", "ranker.documents")
pipeline.connect("ranker", "joinerfinal.documents")
pipeline.connect("emptyretriever", "joinerfinal.documents")
questions = [
    "Was ist diabetes?",
    "DU bist ein pirat und machst rrrr",
]
for question in questions:
    result = pipeline.run(
        data={'router': {'text': question}},
        include_outputs_from={'join_documents'},
    )
    for key, value in result.items():
        print(result[key])
        print("\n")
```
[test_data.zip](https://github.com/user-attachments/files/16350643/test_data.zip)
cc: @wochinge @sjrl @silvanocerza @shadeMe
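The root cause can be modeled in plain Python: a joiner whose mandatory "documents" input never arrives keeps the pipeline waiting forever. A defensive merge treats a branch that was never executed as contributing nothing instead of as a missing input. Names here are illustrative, not Haystack internals:

```python
def merge_join(branch_outputs):
    # Each branch output is a dict; a branch that never ran contributes
    # an empty dict, which is treated as "no documents" rather than
    # "input still pending" (the condition that causes the loop).
    docs = []
    for out in branch_outputs:
        docs.extend(out.get("documents", []))
    return {"documents": docs}

print(merge_join([{"documents": ["a", "b"]}, {}]))  # {'documents': ['a', 'b']}
```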
|
closed
|
2024-07-23T14:55:20Z
|
2024-07-30T13:00:14Z
|
https://github.com/deepset-ai/haystack/issues/8059
|
[
"type:bug",
"topic:pipeline",
"P0",
"2.x"
] |
ju-gu
| 1
|
sigmavirus24/github3.py
|
rest-api
| 748
|
Support for searching commits
|
Any plans to add support for searching commits?
```
https://developer.github.com/v3/search/#search-commits
```
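While library support is pending, the endpoint can be called directly. At the time of this issue, commit search was a preview API gated behind the "cloak-preview" media type; the URL and header come from GitHub's docs, while the helper itself is hypothetical:

```python
from urllib.parse import quote

def commit_search_request(query: str):
    # Build the URL and headers for GitHub's commit search preview API.
    url = f"https://api.github.com/search/commits?q={quote(query)}"
    headers = {"Accept": "application/vnd.github.cloak-preview"}
    return url, headers

url, headers = commit_search_request("repo:sigmavirus24/github3.py fix")
print(url)
```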
|
closed
|
2017-12-17T16:13:20Z
|
2018-08-07T16:28:42Z
|
https://github.com/sigmavirus24/github3.py/issues/748
|
[] |
foxx
| 5
|
Asabeneh/30-Days-Of-Python
|
flask
| 296
|
Typo in Intro
|
Right in the first paragraph you mention "month pythons..." the comedy skit. I believe you meant "Monty"
|
closed
|
2022-08-24T12:08:53Z
|
2023-07-08T22:16:19Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/296
|
[] |
nickocruzm
| 0
|
recommenders-team/recommenders
|
data-science
| 1,433
|
[FEATURE] consolidate setup/installation process
|
### Description
consolidate different instructions on installation and setup into 1 recommendation
- proposing we use conda to set up the Python env, but pip install ms-recommenders for all dependencies
### Expected behavior with the suggested feature
- updated docs: setup.md, readmes
- remove conda file generation script (this may be used by certain notebooks i.e. o16n+lgbm+mmlspark)
### Other Comments
#1431
|
closed
|
2021-06-08T14:22:22Z
|
2021-12-17T10:22:18Z
|
https://github.com/recommenders-team/recommenders/issues/1433
|
[
"enhancement"
] |
gramhagen
| 0
|
serengil/deepface
|
machine-learning
| 740
|
seems to be failing with latest python
|
(faceenv) ankit@ankit-System-Product-Name:~$ conda install -c conda-forge deepface
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: \
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:
Specifications:
- deepface -> python[version='>=3.10,<3.11.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0|>=3.7,<3.8.0a0']
Your python: python=3.11
If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__cuda==12.1=0
- feature:/linux-64::__glibc==2.35=0
- feature:|@/linux-64::__glibc==2.35=0
- deepface -> tensorflow[version='>=1.9.0'] -> __cuda
- deepface -> tensorflow[version='>=1.9.0'] -> __glibc[version='>=2.17']
- python=3.11 -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
Your installed version is: 2.35
(faceenv) ankit@ankit-System-Product-Name:~$
|
closed
|
2023-05-01T13:40:52Z
|
2023-05-01T14:17:11Z
|
https://github.com/serengil/deepface/issues/740
|
[
"dependencies"
] |
ankit-g
| 1
|
sunscrapers/djoser
|
rest-api
| 51
|
Why only users with usable passwords can use password reset routine?
|
In this line: https://github.com/sunscrapers/djoser/blob/master/djoser/views.py#L125 there is a check if user has unusable password and such user is ignored when generating password reset email for him. What's the reason for that?
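The check can be modeled with a stub: Django marks users created without a password (e.g. via social auth) with an unusable one, conventionally prefixed with `!`, and djoser skips those users when emailing reset links. The classes below are stand-ins, not Django's:

```python
class User:
    def __init__(self, password):
        self.password = password

    def has_usable_password(self):
        # Django stores unusable passwords with a leading "!".
        return not self.password.startswith("!")

users = [User("pbkdf2$..."), User("!social-auth-user")]
resettable = [u for u in users if u.has_usable_password()]
print(len(resettable))  # 1: the social-auth user is skipped
```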
|
closed
|
2015-06-02T08:09:22Z
|
2015-06-02T08:34:40Z
|
https://github.com/sunscrapers/djoser/issues/51
|
[] |
barszczmm
| 2
|
python-gino/gino
|
sqlalchemy
| 437
|
How do I use Gino without a pool?
|
I am trying to use Gino alongside PgBouncer. I am having issues with the Gino pool that I can only solve with PgBouncer. The issue is that the Gino pool will take and hold a lot of connections even if it is not using them. How can I use Gino in a way that creates a new connection every time I use it?
It seems that the default strategy of Gino does not support sqlalchemy's `NullPool`. Could this be supported?
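The behavior being asked for can be sketched as a connection-per-use pattern: open a fresh connection for each operation and close it immediately, leaving all pooling to PgBouncer. The connection factory below is a stand-in; Gino/asyncpg would supply the real one:

```python
from contextlib import contextmanager

@contextmanager
def fresh_connection(connect):
    # Open a new connection, hand it to the caller, and close it on exit
    # so nothing is held open between uses (NullPool-style behavior).
    conn = connect()
    try:
        yield conn
    finally:
        conn["closed"] = True  # stand-in for conn.close()

def connect():
    return {"closed": False}

with fresh_connection(connect) as c:
    assert not c["closed"]
print(c["closed"])  # True: the connection was released immediately
```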
|
closed
|
2019-02-11T23:25:38Z
|
2019-02-18T01:56:57Z
|
https://github.com/python-gino/gino/issues/437
|
[
"question"
] |
jared-mackey
| 6
|
thtrieu/darkflow
|
tensorflow
| 228
|
Training my own net
|
I want to train my own net but I don't know the format of the images & annotations.
Do I have to convert all the annotation to VOC format or something?
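Darkflow's training docs describe Pascal VOC-style XML annotations; assuming that format, a minimal generator with the standard VOC fields (`filename`, `size`, `object`/`bndbox`) looks like this. The values are placeholders:

```python
import xml.etree.ElementTree as ET

def voc_annotation(filename, w, h, label, box):
    # Build a single-object Pascal VOC annotation as an XML string.
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    size = ET.SubElement(ann, "size")
    for tag, val in zip(("width", "height", "depth"), (w, h, 3)):
        ET.SubElement(size, tag).text = str(val)
    obj = ET.SubElement(ann, "object")
    ET.SubElement(obj, "name").text = label
    bnd = ET.SubElement(obj, "bndbox")
    for tag, val in zip(("xmin", "ymin", "xmax", "ymax"), box):
        ET.SubElement(bnd, tag).text = str(val)
    return ET.tostring(ann, encoding="unicode")

xml = voc_annotation("img1.jpg", 640, 480, "car", (10, 20, 100, 200))
print("<xmin>10</xmin>" in xml)  # True
```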
|
closed
|
2017-05-14T14:48:18Z
|
2017-05-19T02:23:33Z
|
https://github.com/thtrieu/darkflow/issues/228
|
[] |
hungnguyen0606
| 4
|
man-group/arctic
|
pandas
| 273
|
Chunkstore reading by columns issue when specifying index
|
When you specify specific columns to read from ChunkStore, you always get the index for free, if it exists. If you mistakenly specify the index, or parts of the index, in the columns list, you get a very strange error:
```
arctic/arctic/chunkstore/chunkstore.py in read(self, symbol, chunk_range, filter_data, **kwargs)
223 chunks.append({DATA: chunk_data, METADATA: segments[0][METADATA]})
224
--> 225 data = SER_MAP[sym[SERIALIZER]].deserialize(chunks, **kwargs)
226
227 if not filter_data or chunk_range is None:
arctic/arctic/serialization/numpy_arrays.py in deserialize(self, data, columns)
180 if INDEX in data[0][METADATA]:
181 print data[0][METADATA][INDEX]
--> 182 df = df.set_index(data[0][METADATA][INDEX])
183 else:
184 df = self.converter.objify(data, columns)
pandas/core/frame.pyc in set_index(self, keys, drop, append, inplace, verify_integrity)
2611 arrays.append(level)
2612
-> 2613 index = MultiIndex.from_arrays(arrays, names=names)
2614
2615 if verify_integrity and not index.is_unique:
pandas/core/index.pyc in from_arrays(cls, arrays, sortorder, names)
4408 return Index(arrays[0], name=name)
4409
-> 4410 cats = [Categorical.from_array(arr, ordered=True) for arr in arrays]
4411 levels = [c.categories for c in cats]
4412 labels = [c.codes for c in cats]
pandas/core/categorical.pyc in from_array(cls, data, **kwargs)
353 the unique values of `data`.
354 """
--> 355 return Categorical(data, **kwargs)
356
357 @classmethod
pandas/core/categorical.pyc in __init__(self, values, categories, ordered, name, fastpath, levels)
278
279 ### FIXME ####
--> 280 raise NotImplementedError("> 1 ndim Categorical are not supported at this time")
281
282 else:
NotImplementedError: > 1 ndim Categorical are not supported at this time
```
This error message doesn't really indicate what the true issue is, so we need to put something in to warn or prevent the user from repeating any columns in the columns list.
|
closed
|
2016-11-03T13:12:22Z
|
2016-11-03T16:08:22Z
|
https://github.com/man-group/arctic/issues/273
|
[
"bug"
] |
bmoscon
| 0
|
inducer/pudb
|
pytest
| 271
|
Latest PuDB (2017.1.3) dies with ListBoxError in some cases
|
Stacktrace (edited slightly for anonymity):
```
File "/usr/lib/python2.7/bdb.py", line 49, in trace_dispatch
return self.dispatch_line(frame)
File ".../site-packages/pudb/debugger.py", line 160, in dispatch_line
self.user_line(frame)
File ".../site-packages/pudb/debugger.py", line 381, in user_line
self.interaction(frame)
File ".../site-packages/pudb/debugger.py", line 349, in interaction
show_exc_dialog=show_exc_dialog)
File ".../site-packages/pudb/debugger.py", line 2084, in call_with_ui
return f(*args, **kwargs)
File ".../site-packages/pudb/debugger.py", line 2322, in interaction
self.event_loop()
File ".../site-packages/pudb/debugger.py", line 2280, in event_loop
canvas = toplevel.render(self.size, focus=True)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/container.py", line 1083, in render
focus and self.focus_part == 'body')
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/container.py", line 2085, in render
focus = focus and self.focus_position == i)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/container.py", line 1526, in render
canv = w.render((maxcol, rows), focus=focus and item_focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/container.py", line 1526, in render
canv = w.render((maxcol, rows), focus=focus and item_focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/listbox.py", line 489, in render
raise ListBoxError, "Widget %r at position %r within listbox calculated %d rows but rendered %d!"% (widget,w_pos,w_rows, canvas.rows())
ListBoxError: Widget <VariableWidget selectable flow widget> at position 3 within listbox calculated 1 rows but rendered 0!
```
This happens under Ubuntu 17.04 using Gnome Terminal (pudb is being run inside of a docker container also running Ubuntu 17.04 if it matters). The error goes away when using PuDB 2017.1.2. It also seems to be dependent on the terminal size. For example it happens both with my "maximized" terminal size (158x46) as well as at 132x43, but it does not happen when the terminal is resized to 80x24. My pudb.cfg:
```
[pudb]
breakpoints_weight = 1
current_stack_frame = top
custom_shell =
custom_stringifier =
custom_theme =
display = auto
line_numbers = True
prompt_on_quit = True
seen_welcome = e032
shell = internal
sidebar_width = 0.5
stack_weight = 1
stringifier = type
theme = classic
variables_weight = 1
wrap_variables = True
```
This is also potentially related to #269
|
closed
|
2017-09-02T12:06:48Z
|
2018-05-24T14:25:04Z
|
https://github.com/inducer/pudb/issues/271
|
[] |
cdman
| 14
|
healthchecks/healthchecks
|
django
| 538
|
ConnectWise Manage Ticketing integration
|
We would like low priority alerts to become tickets in our ConnectWise Manage instance.
I'm happy to write this integration, but ConnectWise require a `clientID` Authorization header to be passed.
Whilst I could use my employer's clientId internally, for wider inclusion of the integration this project would need to [sign up for the ConnectWise Developer program](https://register.developer.connectwise.com), [apply for a Vendor clientID](https://developer.connectwise.com/ClientID/Vendor_Client_IDs), and then use that in the integration, rather than using a 'partner' clientId that's unique to a particular ConnectWise partner.
@cuu508, are you able to go through this registration process as the owner of the project? I'll get to work on writing an integration.
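The extra requirement amounts to one additional header on every API call; a sketch of building the headers, where the UUID is a placeholder for the vendor clientId discussed above and the helper is hypothetical:

```python
def cw_headers(client_id: str, basic_auth: str):
    # ConnectWise Manage requires a clientId header alongside the usual
    # Basic auth credentials on every request.
    return {"clientId": client_id, "Authorization": f"Basic {basic_auth}"}

h = cw_headers("00000000-0000-0000-0000-000000000000", "dG9rZW4=")
print(sorted(h))  # ['Authorization', 'clientId']
```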
|
closed
|
2021-06-29T23:21:53Z
|
2021-08-27T09:21:34Z
|
https://github.com/healthchecks/healthchecks/issues/538
|
[] |
jameskirsop
| 6
|
gunthercox/ChatterBot
|
machine-learning
| 2,013
|
How to know the chatbot line commands
|
A friend gave me a protection bot for my LINE group. I only know a few commands, which are:
.gowner
.gstaff
.gadmin
.tagall
.kick
I want the command that can show who is online now.
Can someone help me with the rest of the commands, please.
|
closed
|
2020-07-21T05:05:52Z
|
2020-08-22T18:44:28Z
|
https://github.com/gunthercox/ChatterBot/issues/2013
|
[
"question",
"answered"
] |
Humanistone
| 1
|
mckinsey/vizro
|
plotly
| 316
|
How to create user for vizro toolkit dashboard
|
### Question
How to create login user for vizro toolkit dashboard?
### Code/Examples
_No response_
### Other information
_No response_
### Which package?
vizro
### Package version
_No response_
### Python version
python3.8
### OS
ubuntu
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
|
open
|
2024-02-19T06:03:53Z
|
2024-07-08T15:03:32Z
|
https://github.com/mckinsey/vizro/issues/316
|
[
"General Question :question:"
] |
KathirvelPriya
| 1
|
tensorpack/tensorpack
|
tensorflow
| 1,393
|
Does this code support TensorRT acceleration?
|
Does the model support TensorRT acceleration when testing on sample data?
|
closed
|
2020-02-03T09:27:05Z
|
2020-02-03T15:55:11Z
|
https://github.com/tensorpack/tensorpack/issues/1393
|
[] |
liningxiao
| 1
|
mwaskom/seaborn
|
data-science
| 3,375
|
`PlotSpecError` after setting color parameter on so.Plot.scale
|
Setting the color param with an integer series on `so.Plot.add` and then setting the color param on `so.Plot.scale` to a qualitative palette raises `PlotSpecError: Scaling operation failed for the color variable`. If the color palette is sequential, no error is raised. I don't believe this is intended, given that it works when the series is cast to a str, category, or float.
Example:
```python
import seaborn as sns
import seaborn.objects as so
# loading example dataset and splitting subject column into its number
fmri = sns.load_dataset('fmri').assign(subject=lambda df: df.subject.str.split('s', expand=True).iloc[:,1].astype(int))
(
so.Plot(fmri, x='timepoint', y='signal')
.add(so.Lines(alpha=.7), so.Agg(), color='subject')
.scale(color='deep')
.add(so.Line(linewidth=3), so.Agg())
)
```
<details><summary>Traceback</summary>
```
IndexError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/seaborn/_marks/base.py:179, in Mark._resolve(self, data, name, scales)
178 try:
--> 179 feature = scale(value)
180 except Exception as err:
File /opt/conda/lib/python3.10/site-packages/seaborn/_core/scales.py:129, in Scale.__call__(self, data)
128 if func is not None:
--> 129 trans_data = func(trans_data)
131 if scalar_data:
File /opt/conda/lib/python3.10/site-packages/seaborn/_core/properties.py:682, in Color._get_nominal_mapping.<locals>.mapping(x)
681 out = np.full((len(ixs), colors.shape[1]), np.nan)
--> 682 out[use] = np.take(colors, ixs[use], axis=0)
683 return out
File <__array_function__ internals>:180, in take(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/numpy/core/fromnumeric.py:190, in take(a, indices, axis, out, mode)
95 """
96 Take elements from an array along an axis.
97
(...)
188 [5, 7]])
189 """
--> 190 return _wrapfunc(a, 'take', indices, axis=axis, out=out, mode=mode)
File /opt/conda/lib/python3.10/site-packages/numpy/core/fromnumeric.py:57, in _wrapfunc(obj, method, *args, **kwds)
56 try:
---> 57 return bound(*args, **kwds)
58 except TypeError:
59 # A TypeError occurs if the object does have such a method in its
60 # class, but its signature is not identical to that of NumPy's. This
(...)
64 # Call _wrapit from within the except clause to ensure a potential
65 # exception has a traceback chain.
IndexError: index 14 is out of bounds for axis 0 with size 14
The above exception was the direct cause of the following exception:
PlotSpecError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/IPython/core/formatters.py:344, in BaseFormatter.__call__(self, obj)
342 method = get_real_method(obj, self.print_method)
343 if method is not None:
--> 344 return method()
345 return None
346 else:
File /opt/conda/lib/python3.10/site-packages/seaborn/_core/plot.py:279, in Plot._repr_png_(self)
277 def _repr_png_(self) -> tuple[bytes, dict[str, float]]:
--> 279 return self.plot()._repr_png_()
File /opt/conda/lib/python3.10/site-packages/seaborn/_core/plot.py:821, in Plot.plot(self, pyplot)
817 """
818 Compile the plot spec and return the Plotter object.
819 """
820 with theme_context(self._theme_with_defaults()):
--> 821 return self._plot(pyplot)
File /opt/conda/lib/python3.10/site-packages/seaborn/_core/plot.py:851, in Plot._plot(self, pyplot)
849 # Process the data for each layer and add matplotlib artists
850 for layer in layers:
--> 851 plotter._plot_layer(self, layer)
853 # Add various figure decorations
854 plotter._make_legend(self)
File /opt/conda/lib/python3.10/site-packages/seaborn/_core/plot.py:1366, in Plotter._plot_layer(self, p, layer)
1363 grouping_vars = mark._grouping_props + default_grouping_vars
1364 split_generator = self._setup_split_generator(grouping_vars, df, subplots)
-> 1366 mark._plot(split_generator, scales, orient)
1368 # TODO is this the right place for this?
1369 for view in self._subplots:
File /opt/conda/lib/python3.10/site-packages/seaborn/_marks/line.py:186, in Paths._plot(self, split_gen, scales, orient)
183 line_data[ax]["segments"].extend(segments)
184 n = len(segments)
--> 186 vals = resolve_properties(self, keys, scales)
187 vals["color"] = resolve_color(self, keys, scales=scales)
189 line_data[ax]["colors"].extend([vals["color"]] * n)
File /opt/conda/lib/python3.10/site-packages/seaborn/_marks/base.py:235, in resolve_properties(mark, data, scales)
231 def resolve_properties(
232 mark: Mark, data: DataFrame, scales: dict[str, Scale]
233 ) -> dict[str, Any]:
--> 235 props = {
236 name: mark._resolve(data, name, scales) for name in mark._mappable_props
237 }
238 return props
File /opt/conda/lib/python3.10/site-packages/seaborn/_marks/base.py:236, in <dictcomp>(.0)
231 def resolve_properties(
232 mark: Mark, data: DataFrame, scales: dict[str, Scale]
233 ) -> dict[str, Any]:
235 props = {
--> 236 name: mark._resolve(data, name, scales) for name in mark._mappable_props
237 }
238 return props
File /opt/conda/lib/python3.10/site-packages/seaborn/_marks/base.py:181, in Mark._resolve(self, data, name, scales)
179 feature = scale(value)
180 except Exception as err:
--> 181 raise PlotSpecError._during("Scaling operation", name) from err
183 if return_array:
184 feature = np.asarray(feature)
PlotSpecError: Scaling operation failed for the `color` variable. See the traceback above for more information.
```
</details>
|
open
|
2023-05-28T21:42:57Z
|
2024-07-10T02:31:22Z
|
https://github.com/mwaskom/seaborn/issues/3375
|
[
"bug",
"objects-scale"
] |
joaofauvel
| 3
|
plotly/dash-bio
|
dash
| 555
|
Manhattan Plot page not correctly displayed
|
The demo is displayed offset from the initial web page layout. See attached figure. This was using Chrome Version 89.0.4389.114 (Official Build) (64-bit).

|
closed
|
2021-04-09T18:35:40Z
|
2021-10-28T14:18:05Z
|
https://github.com/plotly/dash-bio/issues/555
|
[] |
pepisito
| 1
|
supabase/supabase-py
|
flask
| 288
|
How can I add data to a user on creation?
|
I want to add some data to users on sign-up, like a username. How can I do this?
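A sketch, assuming supabase-py v2's client shape: extra data passed under `options.data` at sign-up reportedly ends up in the user's `user_metadata`. The helper name and credentials below are illustrative, not verified against every client version:

```python
def sign_up_payload(email, password, **metadata):
    # Shape expected by supabase-py v2's auth.sign_up(); the
    # "options"/"data" nesting follows the GoTrue API and is an
    # assumption here.
    return {
        "email": email,
        "password": password,
        "options": {"data": metadata},
    }

payload = sign_up_payload("someone@example.com", "secret-password", username="someone")
# With a configured client one would then call:
# supabase.auth.sign_up(payload)
```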
|
closed
|
2022-10-15T13:07:50Z
|
2024-03-28T03:10:47Z
|
https://github.com/supabase/supabase-py/issues/288
|
[] |
themg95
| 2
|
onnx/onnx
|
deep-learning
| 5,810
|
Assertion `false` failed: No Adapter From Version $19 for Constant
|
# Bug Report
### Is the issue related to model conversion?
<!-- If the ONNX checker reports issues with this model then this is most probably related to the converter used to convert the original framework model to ONNX. Please create this bug in the appropriate converter's GitHub repo (pytorch, tensorflow-onnx, sklearn-onnx, keras-onnx, onnxmltools) to get the best help. -->
Yes, it is. However, the issue does not appear when converting my TensorFlow model to Onnx, but only later when I run `onnx.version_converter.convert_version()` to convert between Onnx Opset versions. Therefore, I decided to post the issue in this repo.
### Describe the bug
My goal is to convert a TensorFlow model to Onnx with a "current" version of Onnx. I managed to convert my models using Onnx version 1.12.0, but not with versions 1.15.0 and 1.14.1.
When I tried the newest version I came across the bug reported in this issue: https://github.com/onnx/tensorflow-onnx/issues/2262. As suggested in the comments I downgraded Onnx to 1.14.1, which is not a fix I like, but I tried it anyway. Afterwards, this first issue was solved, but I ran into another one:
> RuntimeError: /github/workspace/onnx/version_converter/BaseConverter.h:68: adapter_lookup: Assertion `false` failed: No Adapter From Version $19 for Constant
In my particular case I tried to use the function `onnx.version_converter.convert_version()` to convert to opset version 16, but I also don't mind if it is a version >=16. Therefore, I also tried the target opset versions 18 and 19. However, it appears that for versions below 19 I will always run into the issue reported here and if I use version 19 I run into the following exception:
> ValueError: make_sure failure: Opset 19 is not supported yet. Please use a lower opset
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*): Linux Ubuntu 22.04.3 LTS
- ONNX version (*e.g. 1.13*): 1.14.1
- Python version: 3.11.5
- GCC/Compiler version (if compiling from source): /
- CMake version: /
- Protobuf version: 3.20.3
- Visual Studio version (if applicable): /
### Reproduction instructions
```
import onnx
from onnx import version_converter
m_path = "<PATH TO SOME ONNX MODEL>"
onnx_model = onnx.load(m_path)
converted_model = version_converter.convert_version(onnx_model , 16)
```
### Expected behavior
The conversion should work as it did for earlier versions of Onnx.
|
open
|
2023-12-18T13:36:51Z
|
2024-10-26T21:47:05Z
|
https://github.com/onnx/onnx/issues/5810
|
[
"topic: converters",
"contributions welcome"
] |
RabJon
| 15
|
strawberry-graphql/strawberry
|
asyncio
| 3,682
|
Federated schema breaks when referring to the Query type from another type
|
While it's a bit niche, we have a use case where we want to return `Query` from an object field (specifically this is in a mutation result to allow callers to request arbitrary parts of the schema after a mutation).
This works fine on a standard `strawberry.Schema`, but breaks down when using `strawberry.federation.Schema`.
## Reproduction
This works fine without federation, for example this minimal test case:
```python
from __future__ import annotations
import strawberry
@strawberry.type
class Query:
@strawberry.field
def foo(self) -> Foo:
return Foo()
@strawberry.type
class Foo:
bar: int = 42
@strawberry.field
def query(self) -> Query:
return Query()
schema = strawberry.Schema(query=Query)
result = schema.execute_sync("query { foo { query { foo { bar } } } }")
assert result.data == {'foo': {'query': {'foo': {'bar': 42}}}}
```
Works fine (either run `python <file>` or `strawberry export-schema <module>:schema`).
However if you use the federation integration:
```diff
diff --git a/test/schema.py b/test/schema_federated.py
index f3b1a0d485eb..1bb2c1f5d101 100644
--- a/test/schema.py
+++ b/test/schema_federated.py
@@ -19,7 +19,7 @@ class Foo:
return Query()
-schema = strawberry.Schema(query=Query)
+strawberry.federation.Schema(query=Query)
result = schema.execute_sync("query { foo { query { foo { bar } } } }")
```
```python
from __future__ import annotations
import strawberry
@strawberry.type
class Query:
@strawberry.field
def foo(self) -> Foo:
return Foo()
@strawberry.type
class Foo:
bar: int = 42
@strawberry.field
def query(self) -> Query:
return Query()
schema = strawberry.federation.Schema(query=Query)
result = schema.execute_sync("query { foo { query { foo { bar } } } }")
assert result.data == {'foo': {'query': {'foo': {'bar': 42}}}}
```
You get:
```
error: Type `Query` is defined multiple times in the schema
@ test/schema_federated.py:7
6 | @strawberry.type
❱ 7 | class Query:
^^^^^ first class defined here
8 | @strawberry.field
9 | def foo(self) -> Foo:
10 | return Foo()
To fix this error you should either rename the type or remove the duplicated definition.
Read more about this error on https://errors.strawberry.rocks/duplicated-type-name
```
(the link isn't useful here as the issue isn't non-unique names)
I've tracked it down to how the federation schema works, specifically `_get_federation_query_type` ([1]) which creates a new type that's purely internal and when the converter tries to match them up in `validate_same_type_definition` ([2]) the types aren't the same anymore, one is the type as defined in the consumer python module and the other is the internal type to which the federation fields have been added. When we reach the raise statement ([3]), the values extracted from the `rich` formatted exception are:
```
│ │ first_type_definition = StrawberryObjectDefinition( │ │
│ │ │ name='Query', │ │
│ │ │ is_input=False, │ │
│ │ │ is_interface=False, │ │
│ │ │ origin=<class 'strawberry.tools.merge_types.Query'>, │ │
│ │ │ description=None, │ │
│ │ │ interfaces=[], │ │
│ │ │ extend=False, │ │
│ │ │ directives=(), │ │
│ │ │ is_type_of=None, │ │
│ │ │ resolve_type=None, │ │
│ │ │ fields=[ │ │
│ │ │ │ Field(name='service',type=<class │ │
│ │ 'strawberry.federation.schema.Schema._get_federation_query_type.<locals>.Service'>,default=<dataclasses._MISSING_TYPE object at │ │
│ │ 0x10487f2c0>,default_factory=<dataclasses._MISSING_TYPE object at │ │
│ │ 0x10487f2c0>,init=False,repr=False,hash=None,compare=False,metadata=mappingproxy({}),kw_only=True,_field_type=_FIELD), │ │
│ │ │ │ Field(name='foo',type=<class 'test.schema_federated.Foo'>,default=<dataclasses._MISSING_TYPE object at │ │
│ │ 0x10487f2c0>,default_factory=<dataclasses._MISSING_TYPE object at │ │
│ │ 0x10487f2c0>,init=False,repr=False,hash=None,compare=False,metadata=mappingproxy({}),kw_only=True,_field_type=_FIELD) │ │
│ │ │ ], │ │
│ │ │ concrete_of=None, │ │
│ │ │ type_var_map={} │ │
│ │ )
```
and:
```
│ │ second_type_definition = StrawberryObjectDefinition( │ │
│ │ │ name='Query', │ │
│ │ │ is_input=False, │ │
│ │ │ is_interface=False, │ │
│ │ │ origin=<class 'test.schema_federated.Query'>, │ │
│ │ │ description=None, │ │
│ │ │ interfaces=[], │ │
│ │ │ extend=False, │ │
│ │ │ directives=(), │ │
│ │ │ is_type_of=None, │ │
│ │ │ resolve_type=None, │ │
│ │ │ fields=[ │ │
│ │ │ │ Field(name='foo',type=<class 'test.schema_federated.Foo'>,default=<dataclasses._MISSING_TYPE object at │ │
│ │ 0x10487f2c0>,default_factory=<dataclasses._MISSING_TYPE object at │ │
│ │ 0x10487f2c0>,init=False,repr=False,hash=None,compare=False,metadata=mappingproxy({}),kw_only=True,_field_type=_FIELD) │ │
│ │ │ ], │ │
│ │ │ concrete_of=None, │ │
│ │ │ type_var_map={} │ │
│ │ ) │ │
```
This feels like it should be supported with or without federation but as far as I could find, there's no currently supported way to have a field type be a forward reference to _"the final `Query` type"_.
We haven't found a solution we're happy with yet, but it looks like either extracting the bits creating the new type so we can get a hold of a reference to the type or possibly a custom `SchemaConverter` could work as a workaround (will update when we have something working, hoping there's something that can be turned into a PR but reporting for now for tracking).
Also to note using `lazy` here doesn't help as it requires an importable type, but a variation/expansion on the `lazy` concept could work to express _"reference to the final schema type"_, although probably can't be made type safe.
[1]: https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/federation/schema.py#L92
[2]: https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/schema/schema_converter.py#L916C9-L916C39
[3]: https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/schema/schema_converter.py#L1000
## System Information
- Operating system: Mac OS (but don't think that will matter here)
- Strawberry version (if applicable): Latest version
|
open
|
2024-10-29T14:05:45Z
|
2025-03-20T15:56:54Z
|
https://github.com/strawberry-graphql/strawberry/issues/3682
|
[
"bug"
] |
lirsacc-mns
| 0
|
ray-project/ray
|
data-science
| 50,875
|
bazel-lint all BUILD files
|
### Description
Bazel linter precommit hook is checked in and enabled for partial directory in [PR](https://github.com/ray-project/ray/pull/50869).
We would like to enable it folder by folder.
- (assigned) python folder: https://github.com/ray-project/ray/issues/51091
### Use case
_No response_
|
open
|
2025-02-25T01:39:46Z
|
2025-03-05T05:29:16Z
|
https://github.com/ray-project/ray/issues/50875
|
[
"good-first-issue",
"enhancement",
"P2",
"help-wanted"
] |
dentiny
| 7
|
ultralytics/yolov5
|
pytorch
| 12,708
|
P6 models show the input dimensions as 640 instead of 1280, visualizing the model in Netron.
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I'm trying to use the L6 models in TensorRT with C++. When printing the input dimensions for the network, they show up as 640. To confirm, I visualized the ONNX model in [Netron](https://netron.app/) and saw that the input dimensions are `1x3x640x640`.
**Shouldn't the input sizes be 1280, however?**
### Additional
_No response_
|
closed
|
2024-02-05T12:46:55Z
|
2024-10-20T19:39:02Z
|
https://github.com/ultralytics/yolov5/issues/12708
|
[
"question",
"Stale"
] |
divineSix
| 3
|
K3D-tools/K3D-jupyter
|
jupyter
| 334
|
k3d.points slow with shader='mesh'
|
`k3d.points(np.random.randn(1000,3), point_size=0.1, shader='mesh')` renders with fps < 1
|
closed
|
2022-03-28T17:30:53Z
|
2022-04-11T14:42:41Z
|
https://github.com/K3D-tools/K3D-jupyter/issues/334
|
[] |
marcinofulus
| 3
|
pallets/flask
|
python
| 5,123
|
AttributeError: 'Flask' object has no attribute 'before_first_request'
|
When I run code in [deepzoom](https://github.com/openslide/openslide-python/blob/main/examples/deepzoom/deepzoom_multiserver.py), it gives me **AttributeError: 'Flask' object has no attribute 'before_first_request'**
<img width="659" alt="截屏2023-05-12 18 25 03" src="https://github.com/pallets/flask/assets/48406770/fa211b95-b545-4fa0-be8b-94c8b98a85c8">
I don't know anything about Flask. I'm just a student who wants to view the image.
Can I replace this line with something simple?
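For reference, `before_first_request` was removed in Flask 2.3. A minimal run-once pattern can replace it; shown here as a framework-independent sketch, with the Flask wiring commented out (it assumes the example's `app` object):

```python
import functools

def run_once(func):
    # Execute func on the first call only; later calls are no-ops.
    done = False

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        nonlocal done
        if not done:
            done = True
            return func(*args, **kwargs)

    return wrapper

@run_once
def init_slides():
    # hypothetical one-time setup, standing in for the deepzoom init code
    return "initialized"

# With the example's Flask app one would register it, e.g.:
# app.before_request(init_slides)
```

Alternatively, if the setup does not need request context, the simplest replacement is to just call it once before `app.run()`.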
|
closed
|
2023-05-12T10:25:19Z
|
2023-05-27T00:05:27Z
|
https://github.com/pallets/flask/issues/5123
|
[] |
dirtycomputer
| 5
|
Farama-Foundation/Gymnasium
|
api
| 1,147
|
[Bug Report] Possibly outdated docstring for TimeLimit max_episode_steps
|
### Describe the bug
If you look at the docstring for `TimeLimit`, it says that the `max_episode_steps` argument may be None. However, neither the type annotation nor the wrapper's logic take this into account.
https://github.com/Farama-Foundation/Gymnasium/blob/52b6878618cf54ef1133342e4e34bb37d0122511/gymnasium/wrappers/common.py#L90-L100
### Code example
```python
from typing import cast
from unittest.mock import Mock

from gymnasium import Env
from gymnasium.wrappers import TimeLimit

env = cast(Env, Mock(Env))
env.step.return_value = None, float("nan"), False, False, {}
env = TimeLimit(env, max_episode_steps=None)
env.reset()
env.step(None)
```
which raises:
```
Traceback (most recent call last):
  File "[…].py", line 11, in <module>
    env.step(None)
  File "[…]/gymnasium/wrappers/time_limit.py", line 60, in step
    if self._elapsed_steps >= self._max_episode_steps:
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '>=' not supported between instances of 'int' and 'NoneType'
```
### System info
- gymnasium 0.29.1
- installed via Pip
- OS is Fedora 40
- Python 3.12.4
### Additional context
IMO it's both fine to drop support for `max_episode_steps=None` or to adjust the code so that it reads the info from `env.spec`. I wanted to raise the issue though before submitting a PR and possibly fixing it in the wrong way
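A minimal sketch of how the comparison could be guarded so that `None` means "no limit". Names mirror the wrapper's private attributes, but this is illustrative, not the actual Gymnasium code:

```python
class TimeLimitSketch:
    # Illustrative stand-in for gymnasium.wrappers.TimeLimit with a
    # None-tolerant step(); not the real implementation.
    def __init__(self, env, max_episode_steps=None):
        self.env = env
        self._max_episode_steps = max_episode_steps
        self._elapsed_steps = 0

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        self._elapsed_steps += 1
        # Guard the comparison so that None means "no time limit".
        if (
            self._max_episode_steps is not None
            and self._elapsed_steps >= self._max_episode_steps
        ):
            truncated = True
        return obs, reward, terminated, truncated, info
```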
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
|
closed
|
2024-08-26T08:02:59Z
|
2024-08-26T16:52:51Z
|
https://github.com/Farama-Foundation/Gymnasium/issues/1147
|
[
"bug"
] |
troiganto
| 2
|
widgetti/solara
|
fastapi
| 1,021
|
--restart-dir option for 'solara run' doesn't do anything
|
According to the code snippet,
https://github.com/widgetti/solara/blob/8947be4d7ddbc891017e89d6a463a94f8ac0c355/solara/__main__.py#L337
the list is overwritten with a new list, but the new list does not include the paths provided as command-line arguments, effectively excluding them.
## Expected Behaviour
The `--restart-dir` option should correctly add reload directories to Uvicorn.
## Current Behaviour
The `--restart-dir` option is not functioning as intended, and the reload directories are not being added to Uvicorn.
## Specifications
- Solara Version: 1.44.0
- Platform: macOS
- Affected Python Versions: 3.12.4
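A minimal sketch of the kind of fix this suggests: extend rather than replace the reload list. The function name and shape are illustrative, not Solara's actual code:

```python
def merge_reload_dirs(cli_dirs, computed_dirs):
    # Keep both the user-supplied --restart-dir paths and the computed
    # defaults, preserving order and dropping duplicates, instead of
    # overwriting the list outright.
    return list(dict.fromkeys(list(cli_dirs) + list(computed_dirs)))
```

The merged list would then be what gets passed on as Uvicorn's reload directories.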
|
open
|
2025-03-18T17:39:39Z
|
2025-03-18T17:39:39Z
|
https://github.com/widgetti/solara/issues/1021
|
[] |
SeoulSKY
| 0
|
microsoft/nni
|
data-science
| 5,225
|
customized trial issue from wechat
|
**Describe the issue**:
Hello, how can this operation be executed with a command?
That is, could I use an NNI CLI command to do it?

**Environment**:
- NNI version: --
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
|
open
|
2022-11-14T01:58:20Z
|
2022-11-16T02:41:00Z
|
https://github.com/microsoft/nni/issues/5225
|
[
"feature request"
] |
Lijiaoa
| 0
|
dgtlmoon/changedetection.io
|
web-scraping
| 2,914
|
[feature] Randomized delays between browser steps
|
**Version and OS**
Docker ghcr.io/dgtlmoon/changedetection.io:latest (should be 0.48.06)
**Is your feature request related to a problem? Please describe.**
Although I have no prior experience with scraping, it sounds like it would be a good idea to have randomized delays between the "Browser Steps".
I've already discovered you can use "Wait for seconds" action, but adding this between each action is too many clicks.
Central option to just enable this would be awesome.
**Describe the solution you'd like**
An option in general settings which causes random delays (seconds) to be added between the "Browser Steps"
**Describe the use-case and give concrete real-world examples**
Possibly better protection against detections. **Totally theorizing, don't have any evidence.**
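A minimal sketch of what such a central option could compute between steps (purely illustrative, not changedetection.io code):

```python
import random

def random_step_delay(min_s=0.5, max_s=2.5):
    # Delay (in seconds) to sleep between two browser steps; the bounds
    # would come from a hypothetical general-settings option.
    return random.uniform(min_s, max_s)

# Between each browser step the runner would then do:
# time.sleep(random_step_delay())
```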
|
closed
|
2025-01-19T18:49:47Z
|
2025-01-21T13:09:16Z
|
https://github.com/dgtlmoon/changedetection.io/issues/2914
|
[
"enhancement",
"browser-steps"
] |
BurntSideOfTheWaffle
| 3
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 1,175
|
Error when running demo_cli.py
|
After installing the required modules, I tried running demo_cli.py and got:
`python demo_cli.py
Arguments:
enc_model_fpath: saved_models/default/encoder.pt
syn_model_fpath: saved_models/default/synthesizer.pt
voc_model_fpath: saved_models/default/vocoder.pt
cpu: False
no_sound: False
seed: None
Running a test of your configuration...
Using CPU for inference.
Preparing the encoder, the synthesizer and the vocoder...
Loaded encoder "encoder.pt" trained to step 1564501
Synthesizer using device: cpu
Building Wave-RNN
Trainable Parameters: 4.481M
Loading model weights at saved_models/default/vocoder.pt
Testing your configuration with small inputs.
Testing the encoder...
Traceback (most recent call last):
File "/Users/ihf/Real-Time-Voice-Cloning/demo_cli.py", line 80, in <module>
encoder.embed_utterance(np.zeros(encoder.sampling_rate))
File "/Users/ihf/Real-Time-Voice-Cloning/encoder/inference.py", line 144, in embed_utterance
frames = audio.wav_to_mel_spectrogram(wav)
File "/Users/ihf/Real-Time-Voice-Cloning/encoder/audio.py", line 58, in wav_to_mel_spectrogram
frames = librosa.feature.melspectrogram(
TypeError: melspectrogram() takes 0 positional arguments but 2 positional arguments (and 2 keyword-only arguments) were given`
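For context, librosa 0.10 made `melspectrogram`'s inputs keyword-only, which is why the positional call in `encoder/audio.py` fails; passing `y=` and `sr=` explicitly is the usual fix. A minimal reproduction of the failure mode with a keyword-only stand-in (names are illustrative):

```python
# A local stand-in with the same keyword-only shape as
# librosa.feature.melspectrogram in librosa >= 0.10.
def melspectrogram_like(*, y, sr):
    return (len(y), sr)

ok = melspectrogram_like(y=[0.0] * 4, sr=16000)  # keyword call works

try:
    melspectrogram_like([0.0] * 4, 16000)  # positional call fails as in the traceback
    positional_call_failed = False
except TypeError:
    positional_call_failed = True
```

In `encoder/audio.py` the fix would be along the lines of `librosa.feature.melspectrogram(y=wav, sr=sampling_rate, ...)`.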
|
open
|
2023-03-12T19:36:32Z
|
2023-03-14T16:57:13Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1175
|
[] |
ifuchs
| 1
|
MagicStack/asyncpg
|
asyncio
| 177
|
How to connect host PostgreSQL from docker container inside?
|
* **asyncpg version**: 0.12.0
* **PostgreSQL version**: 9.6.3
* **Python version**: 3.6
* **Platform**: macOS 10.12.5
I installed PostgreSQL in my macBookPro and run asyncpg with Sanic inside docker container.
I start the docker container by following script:
```
docker run \
--rm \
-it \
--add-host="localhost:88.88.1.74" \
--name mycontainer \
-v $PWD:/src \
-v $PWD/.vim:/home/dev \
-p 9901:8000 \
<myimage>
```
where 88.88.1.74 is from the `ifconfig en0` command. My Sanic start up code is as follows:
```
DB_CONFIG = {
'host': 'localhost',
'user': 'fzx',
'password': '',
'port': '5432',
'database': 'db1'}
@app.listener('before_server_start')
async def before_server_start(app,loop):
app.pool = await create_pool(**DB_CONFIG, loop=loop, max_size=100)
```
I can connect to PostgreSQL with the above config info using a GUI tool, but when I run the Sanic code, it just says 'Connection refused'.
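A sketch of one common workaround, assuming Docker Desktop on macOS: inside the container, `localhost` is the container itself, so the host's PostgreSQL is reached via the special DNS name `host.docker.internal` instead. Credentials below are the ones from this issue and are for illustration only:

```python
DB_CONFIG = {
    "host": "host.docker.internal",  # was "localhost"
    "user": "fzx",
    "password": "",
    "port": "5432",
    "database": "db1",
}

async def before_server_start(app, loop):
    import asyncpg  # assumed installed in the container image
    app.pool = await asyncpg.create_pool(**DB_CONFIG, loop=loop, max_size=100)
```

PostgreSQL on the host must also accept non-local connections (`listen_addresses` in postgresql.conf and a matching `pg_hba.conf` entry).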
|
closed
|
2017-08-01T13:31:39Z
|
2017-08-04T16:34:51Z
|
https://github.com/MagicStack/asyncpg/issues/177
|
[] |
jonahfang
| 1
|
GibbsConsulting/django-plotly-dash
|
plotly
| 413
|
dash_bootstrap_components emits a deprecation warning
|
I'm getting this warning when the server starts:
```
.../site-packages/dash_bootstrap_components/_table.py:5: UserWarning:
The dash_html_components package is deprecated. Please replace
`import dash_html_components as html` with `from dash import html`
import dash_html_components as html
```
This happens since Dash 2.0.
Manually updating dash_bootstrap_components to the latest version (1.2.1) removes the warning and seems to have no other effects (but I could easily be missing something).
What is the reason for explicitly capping the version to "<1" in setup.py?
Actually, I cannot find _any_ usages of `dash_bootstrap_components` in the source, so I'm tempted to start yelling to TEAR THIS DEPENDENCY OUT WHAT'S IT DOING HERE AAAAAA!!!!11... but then... I suspect I am probably missing a more subtle reason for this dependency, aren't I? >:]
|
closed
|
2022-08-02T20:13:42Z
|
2022-11-14T12:34:27Z
|
https://github.com/GibbsConsulting/django-plotly-dash/issues/413
|
[
"question"
] |
frnhr
| 3
|
vanna-ai/vanna
|
data-visualization
| 615
|
No option to increase fetch limit for Weaviate Vector DB
|
**Is your feature request related to a problem? Please describe.**
As of now, the number of results that can be fetched for Weaviate is hard-coded to 3. This should be flexible, like it is for other vector stores.
**Describe the solution you'd like**
The number of results fetched for Weaviate should be configurable via `config`.
Here is an example for ChromaDB
```
self.n_results_sql = config.get("n_results_sql", config.get("n_results", 10))
self.n_results_documentation = config.get("n_results_documentation", config.get("n_results", 10))
self.n_results_ddl = config.get("n_results_ddl", config.get("n_results", 10))
```
And pinecone
```
self.n_results = config.get("n_results", 10)
```
**Describe alternatives you've considered**
Using a different vector store
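A minimal sketch of how the Weaviate store could resolve the limits from `config`, mirroring the ChromaDB pattern quoted above (names are illustrative, not Vanna's actual attributes):

```python
def resolve_limits(config=None):
    # Per-kind fetch limits fall back to a shared "n_results", then to a
    # default of 10, mirroring the ChromaDB snippet above.
    config = config or {}
    default = config.get("n_results", 10)
    return {
        "sql": config.get("n_results_sql", default),
        "documentation": config.get("n_results_documentation", default),
        "ddl": config.get("n_results_ddl", default),
    }
```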
|
closed
|
2024-08-23T13:42:35Z
|
2024-08-23T15:54:23Z
|
https://github.com/vanna-ai/vanna/issues/615
|
[] |
talrejanikhil
| 6
|
opengeos/leafmap
|
plotly
| 358
|
cannot unpack non-iterable NoneType object
|
<!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.16.0
- Python version: 3.9
- Operating System: macos
### What I Did
I first created a new conda env and installed the geospatial package with mamba; the installed leafmap version was 0.15.0.
I then updated leafmap to 0.16.0 (`conda install leafmap=0.16.0`) and ran the STAC search notebook example, getting the error:
cannot unpack non-iterable NoneType object
<img width="1087" alt="截屏2023-02-06 下午4 36 41" src="https://user-images.githubusercontent.com/21291632/216923790-8f7dd277-e13c-42f4-9124-8f2d2a330489.png">
There is one line above that says "open() takes from 2 to 4 positional arguments but 6 were given".
Could you please tell me how to solve it?
|
closed
|
2023-02-06T08:37:58Z
|
2023-02-10T07:07:58Z
|
https://github.com/opengeos/leafmap/issues/358
|
[
"bug"
] |
benerain
| 1
|
Gozargah/Marzban
|
api
| 958
|
Opening a port
|
Hello and regards,
When we add ports in the Core Settings section and have several nodes, those ports get opened on all of the nodes.
(First question: is there a way to specify on which nodes a given port should be open and on which it should not?) Also, if we have several Marzban panels, this causes a conflict on that port.
When we add a port in the Host section, that port is not opened; it must also have been added in Core Settings.
Second question:
How can we avoid adding the default port in Core Settings and manage it from the Host section instead, or is there a solution that prevents the problem above?

More detail:
In the Port/Host section, if the port we add manually differs from the port that exists in Core Settings, that port is not opened.
Thanks
|
closed
|
2024-04-28T06:43:55Z
|
2024-04-28T12:05:50Z
|
https://github.com/Gozargah/Marzban/issues/958
|
[
"Invalid"
] |
yaramahmadi
| 1
|
apache/airflow
|
automation
| 47,907
|
Scheduler crash when deserialising dict while xcom pull
|
### Apache Airflow version
main (development)
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
Scheduler crash when deserialising dict while xcom pull
```
  File "/opt/airflow/airflow/models/dagrun.py", line 943, in update_state
    info = self.task_instance_scheduling_decisions(session)
  File "/opt/airflow/airflow/utils/session.py", line 98, in wrapper
    return func(*args, **kwargs)
  File "/opt/airflow/airflow/models/dagrun.py", line 1123, in task_instance_scheduling_decisions
    schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
  File "/opt/airflow/airflow/models/dagrun.py", line 1222, in _get_ready_tis
    if not schedulable.are_dependencies_met(session=session, dep_context=dep_context):
  File "/opt/airflow/airflow/utils/session.py", line 98, in wrapper
    return func(*args, **kwargs)
  File "/opt/airflow/airflow/models/taskinstance.py", line 2301, in are_dependencies_met
    for dep_status in self.get_failed_dep_statuses(dep_context=dep_context, session=session):
  File "/opt/airflow/airflow/models/taskinstance.py", line 2325, in get_failed_dep_statuses
    for dep_status in dep.get_dep_statuses(self, session, dep_context):
  File "/opt/airflow/airflow/ti_deps/deps/base_ti_dep.py", line 116, in get_dep_statuses
    yield from self._get_dep_statuses(ti, session, cxt)
  File "/opt/airflow/airflow/ti_deps/deps/not_previously_skipped_dep.py", line 61, in _get_dep_statuses
    prev_result = ti.xcom_pull(
  File "/opt/airflow/airflow/utils/session.py", line 98, in wrapper
    return func(*args, **kwargs)
  File "/opt/airflow/airflow/models/taskinstance.py", line 3360, in xcom_pull
    return _xcom_pull(
  File "/opt/airflow/airflow/utils/session.py", line 98, in wrapper
    return func(*args, **kwargs)
  File "/opt/airflow/airflow/models/taskinstance.py", line 553, in _xcom_pull
    return XComModel.deserialize_value(first)
  File "/opt/airflow/airflow/models/xcom.py", line 338, in deserialize_value
    return json.loads(result.value, cls=XComDecoder)
  File "/usr/local/lib/python3.9/json/__init__.py", line 339, in loads
    raise TypeError(f'the JSON object must be str, bytes or bytearray, '
TypeError: the JSON object must be str, bytes or bytearray, not dict
```
### What you think should happen instead?
_No response_
### How to reproduce
Run the below DAG:
```python
from airflow.models import DAG
from airflow.providers.standard.operators.python import PythonOperator, BranchPythonOperator
from airflow.providers.standard.operators.bash import BashOperator
from airflow.providers.standard.operators.empty import EmptyOperator
from pendulum import today
from dags.plugins.airflow_dag_introspection import assert_the_task_states
docs = """
####Purpose
This dag tests that the BranchPythonOperator works correctly by testing that xcoms are only returned from the branch that successfully runs its tasks.\n
It also makes assertions of the tasks states to ensure the tasks that should be skipped are actually skipped.\n
####Expected Behavior
This dag has 7 tasks 5 of which are expected to succeed and 2 of which are expected to be skipped.\n
This dag should pass.
"""
def branch_this_way():
return "branch1"
def branch1(val):
return val
def branch2(val):
return val
def xcoms_check(**context):
ti = context['ti']
val_to_check = ti.xcom_pull(task_ids="branch1", key="return_value")
should_be_none = ti.xcom_pull(task_ids="branch2", key="return_value")
assert val_to_check == {"this": "branch", "should": "return"}
assert should_be_none == None
with DAG(
dag_id="branch_python_operator",
start_date=today('UTC').add(days=-1),
schedule=None,
doc_md=docs,
tags=['core']
) as dag:
brancher = BranchPythonOperator(
task_id="branch_python_operator",
python_callable=branch_this_way,
)
branch1 = PythonOperator(
task_id="branch1",
python_callable=branch1,
op_args=[{"this": "branch", "should": "return"}],
)
branch2 = PythonOperator(
task_id="branch2",
python_callable=branch2,
op_args=[{"this": "branch", "shouldn't": "return"}]
)
d0 = EmptyOperator(task_id="dummy0")
b0 = BashOperator(
task_id="sleep_so_task_skips",
bash_command="sleep 25"
)
check_xcoms = PythonOperator(
task_id="check_xcoms",
python_callable=xcoms_check,
)
check_states = PythonOperator(
task_id="check_task_states",
python_callable=assert_the_task_states,
op_kwargs={"task_ids_and_assertions": {
"branch_python_operator": "success",
"branch1": "success",
"sleep_so_task_skips": "success",
"branch2": "skipped",
"dummy0": "skipped"
}
}
)
brancher >> [branch1, branch2]
branch1 >> b0 >> check_xcoms >> check_states
branch2 >> d0
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
closed
|
2025-03-18T12:20:55Z
|
2025-03-19T17:37:46Z
|
https://github.com/apache/airflow/issues/47907
|
[
"kind:bug",
"priority:critical",
"area:core",
"affected_version:main_branch",
"area:task-execution-interface-aip72",
"affected_version:3.0.0beta"
] |
atul-astronomer
| 0
|
AUTOMATIC1111/stable-diffusion-webui
|
deep-learning
| 16,560
|
Seed not returned via api
|
### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Hello!
When not setting a seed, a random seed is generated and correctly shown in the web GUI. But the API only returns "-1". That is correct so far, as -1 is the default value telling the framework to pick a random seed, but that is exactly the seed I need. Is there a workaround for it?
### Steps to reproduce the problem
Simply use the API to generate an image, do not set a seed, and print out res['parameters'].
### What should have happened?
The value of the random seed should be delivered by the API.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-10-17-15-55.json](https://github.com/user-attachments/files/17415652/sysinfo-2024-10-17-15-55.json)
### Console logs
```Shell
{'prompt': 'A 30-year-old woman with middle-long brown hair and glasses is playing tennis wearing nothing but her glasses and holding a tennis racket while swinging it gracefully on an outdoor tennis court. A large tennis ball logo is prominently displayed on the court surface emphasizing the sport being played., impressive lighting', 'negative_prompt': 'nude, hands, Bokeh/DOF,flat, low contrast, oversaturated, underexposed, overexposed, blurred, noisy', 'styles': None, 'seed': -1, 'subseed': -1, 'subseed_strength': 0, 'seed_resize_from_h': -1, 'seed_resize_from_w': -1, 'sampler_name': None, 'batch_size': 1, 'n_iter': 1, 'steps': 5, 'cfg_scale': 1.5, 'width': 768, 'height': 1024, 'restore_faces': True, 'tiling': None, 'do_not_save_samples': False, 'do_not_save_grid': False, 'eta': None, 'denoising_strength': None, 's_min_uncond': None, 's_churn': None, 's_tmax': None, 's_tmin': None, 's_noise': None, 'override_settings': None, 'override_settings_restore_afterwards': True, 'refiner_checkpoint': None, 'refiner_switch_at': None, 'disable_extra_networks': False, 'comments': None, 'enable_hr': False, 'firstphase_width': 0, 'firstphase_height': 0, 'hr_scale': 2.0, 'hr_upscaler': None, 'hr_second_pass_steps': 0, 'hr_resize_x': 0, 'hr_resize_y': 0, 'hr_checkpoint_name': None, 'hr_sampler_name': None, 'hr_prompt': '', 'hr_negative_prompt': '', 'sampler_index': 'DPM++ SDE', 'script_name': None, 'script_args': [], 'send_images': True, 'save_images': False, 'alwayson_scripts': {}}
```
### Additional information
_No response_
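A possible workaround, based on observed API responses (not verified against every webui version): the seed actually used is reported inside the `info` field of the response, which is a JSON-encoded string:

```python
import json

def actual_seed(api_response):
    # res["parameters"] only echoes the request (seed=-1); the seed the
    # backend actually used appears inside res["info"], a JSON-encoded
    # string. The "seed" key is based on observed responses.
    return json.loads(api_response["info"])["seed"]
```

After a txt2img call, this would be used as `actual_seed(res)` where `res` is the decoded JSON response.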
|
closed
|
2024-10-17T15:56:46Z
|
2024-10-24T01:14:49Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16560
|
[
"not-an-issue"
] |
Marcophono2
| 2
|
xuebinqin/U-2-Net
|
computer-vision
| 139
|
Loading model with keras
|
Hi.
Could you please show me how to load the pre-trained model using ``keras.models.load_model()``?
|
open
|
2021-01-02T23:06:01Z
|
2021-08-02T19:25:17Z
|
https://github.com/xuebinqin/U-2-Net/issues/139
|
[] |
AzazelHD
| 1
|
microsoft/nni
|
machine-learning
| 5,131
|
AttributeError in "Speedup Model with Mask" Tutorial Codes
|
**Describe the issue**:
https://nni.readthedocs.io/en/stable/tutorials/pruning_speedup.html
```python
ModelSpeedup(model, torch.rand(10, 1, 28, 28).to(device), masks).speedup_model()
```
```
Traceback (most recent call last):
File "e:\Projects\NNI\src\CompressionTuto.py", line 18, in <module>
ModelSpeedup(model, torch.rand(10, 1, 28, 28).to(device), masks).speedup_model()
File "d:\pt-env\lib\site-packages\nni\compression\pytorch\speedup\compressor.py", line 543, in speedup_model
self.infer_modules_masks()
File "d:\pt-env\lib\site-packages\nni\compression\pytorch\speedup\compressor.py", line 380, in infer_modules_masks
self.update_direct_sparsity(curnode)
File "d:\pt-env\lib\site-packages\nni\compression\pytorch\speedup\compressor.py", line 228, in update_direct_sparsity
func = jit_to_python_function(node, self)
File "d:\pt-env\lib\site-packages\nni\compression\pytorch\speedup\jit_translate.py", line 555, in jit_to_python_function
return trans_func_dict[node.op_type](node, speedup)
File "d:\pt-env\lib\site-packages\nni\compression\pytorch\speedup\jit_translate.py", line 490, in generate_aten_to_python
positional_num, keyword_list, special_treat = parse_aten_schema(schema)
File "d:\pt-env\lib\site-packages\nni\compression\pytorch\speedup\jit_translate.py", line 411, in parse_aten_schema
if not arg.kwarg_only:
AttributeError: 'torch._C.Argument' object has no attribute 'kwarg_only'
```
**Environment**:
- NNI version: 2.9
- Training service: local
- Client OS: Win10
- Python version: 3.9.8
- PyTorch version: 1.8.1+cu111
- Is conda/virtualenv/venv used?: venv
- Is running in Docker?: No
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
```python
import torch
import time
from nni.compression.pytorch import ModelSpeedup
from scripts.compression_mnist_model import TorchModel, device
model = TorchModel().to(device)
conv1_mask = torch.ones_like(model.conv1.weight.data)
conv1_mask[0:3] = 0
masks = {'conv1': {'weight': conv1_mask}}
start = time.time()
model(torch.rand(128, 1, 28, 28).to(device))
print('Original Model - Elapsed Time : ', time.time() - start)
ModelSpeedup(model, torch.rand(10, 1, 28, 28).to(device), masks).speedup_model()
print(model)
```
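The traceback points at `torch._C.Argument` lacking `kwarg_only`, which suggests the installed PyTorch (1.8.1) predates the attribute NNI 2.9 relies on. A stdlib-only sketch of the version comparison (the 1.9 cutoff is an assumption; the traceback only proves 1.8.1 lacks the attribute):

```python
# Hedged sketch: compare the installed torch version string against an
# assumed minimum that exposes torch._C.Argument.kwarg_only.
def version_tuple(version: str) -> tuple:
    """Turn '1.8.1+cu111' into (1, 8) for a numeric comparison."""
    return tuple(int(part) for part in version.split("+")[0].split(".")[:2])

installed = "1.8.1+cu111"   # value from the issue's environment section
assumed_minimum = "1.9.0"   # hypothetical threshold, not confirmed

needs_upgrade = version_tuple(installed) < version_tuple(assumed_minimum)
print(needs_upgrade)  # True for the reported environment
```

Comparing tuples rather than raw strings avoids the `"1.8" > "1.13"` pitfall of lexicographic comparison.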
|
closed
|
2022-09-16T09:28:07Z
|
2022-10-21T21:43:50Z
|
https://github.com/microsoft/nni/issues/5131
|
[
"v2.9.1"
] |
hshwang88
| 3
|
encode/apistar
|
api
| 377
|
Map Multiple HTTP Methods to same view function
|
```python
account_routes = [
Route('/{uuid}/', 'PUT', views.update_account),
Route('/{uuid}/', 'PATCH', views.update_account),
]
```
Is it possible to have two methods like PUT/PATCH map to the same view function?
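One common workaround for the snippet above is to generate one route per method from a list, so both verbs share a handler. A pure-Python sketch of the pattern (plain tuples stand in for apistar's `Route` so no apistar install is needed, and `update_account` is a stub):

```python
# Sketch of mapping several HTTP methods to one view by generating a
# route entry per method that all reference the same handler.
def update_account(uuid):
    # stub handler shared by both verbs
    return {"updated": uuid}

METHODS = ("PUT", "PATCH")
account_routes = [("/{uuid}/", method, update_account) for method in METHODS]

for path, method, view in account_routes:
    print(method, path, view.__name__)
```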
|
closed
|
2017-12-24T06:31:19Z
|
2018-03-19T13:46:48Z
|
https://github.com/encode/apistar/issues/377
|
[] |
audiolion
| 3
|
python-restx/flask-restx
|
flask
| 55
|
use black to enforce codestyle
|
Any thoughts on using [black](https://github.com/psf/black) in order to enforce a coding style?
This way we don't have to bother our contributors with a codestyle; they just have to run black prior to submitting a PR.
|
closed
|
2020-02-13T15:23:19Z
|
2020-02-26T16:51:57Z
|
https://github.com/python-restx/flask-restx/issues/55
|
[
"question",
"ci/cd"
] |
ziirish
| 16
|
Textualize/rich
|
python
| 2,939
|
unused object
|
In the console example code fullscreen.py example, lines 54 through 57, a Table object "message" is created, but it's never used. Not sure what the plan was there but had me going for a bit. Cheers!
|
open
|
2023-04-26T20:32:16Z
|
2023-04-26T20:32:41Z
|
https://github.com/Textualize/rich/issues/2939
|
[] |
nekarkedoc
| 1
|
Asabeneh/30-Days-Of-Python
|
pandas
| 155
|
Python tutorial
|
closed
|
2021-04-26T07:12:13Z
|
2021-07-05T21:59:14Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/155
|
[] |
FantasyFalx
| 0
|
|
samuelcolvin/watchfiles
|
asyncio
| 127
|
Possible to watch single file for modifications?
|
I'm wanting to watch a single txt file for modifications, but when using the default example:
``` python
import asyncio
from watchfiles import awatch
async def main():
async for changes in awatch('/home/user/test_file.txt'):
print(changes)
asyncio.run(main())
```
When I modify test_file.txt, it will print the changes. However, subsequent changes no longer print anything. This likely has to do with the file being deleted and re-added when it is modified, so awatch probably thinks the file is gone.
I have to put the test file in its own directory and then watch that directory for changes, which works and I can deal with that, but I'm just wondering if there is a way to continuously watch a single file for modifications, as in my example above?
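The directory-watch workaround described here can be combined with watchfiles' `watch_filter` parameter so only events for the one file come through. A hedged sketch (the path is the issue's example; whether this survives every editor's save-via-rename behavior is untested here):

```python
import asyncio
from pathlib import Path

TARGET = Path('/home/user/test_file.txt')  # hypothetical path from the issue

def only_target(change, path):
    # watch_filter callback for watchfiles: keep only events touching TARGET,
    # so delete-and-recreate saves are still reported as changes.
    return Path(path) == TARGET

async def main():
    from watchfiles import awatch  # deferred import; requires watchfiles
    # Watch the parent directory instead of the file itself, as the issue
    # describes, and filter the event stream down to the single file.
    async for changes in awatch(TARGET.parent, watch_filter=only_target):
        print(changes)

# asyncio.run(main())  # uncomment to run; loops until interrupted
```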
|
closed
|
2022-04-26T04:56:24Z
|
2022-08-16T20:49:43Z
|
https://github.com/samuelcolvin/watchfiles/issues/127
|
[
"bug"
] |
Wallboy
| 6
|
vanna-ai/vanna
|
data-visualization
| 108
|
Make token assumptions configurable
|
From a user: "We have access to GPT4 as well and would prefer to use it, so perhaps making some of the token assumptions configurable would be nice"
|
closed
|
2023-08-31T20:36:50Z
|
2024-03-14T02:51:07Z
|
https://github.com/vanna-ai/vanna/issues/108
|
[] |
zainhoda
| 1
|
WZMIAOMIAO/deep-learning-for-image-processing
|
pytorch
| 318
|
Get pre-trained weights
|
Can I get your pre-trained weights on GoogleDrive? I can't download it on your website. Please~
|
closed
|
2021-07-17T06:39:43Z
|
2021-07-21T00:47:47Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/318
|
[] |
TomHsu-MIddle
| 2
|
encode/httpx
|
asyncio
| 2,432
|
AttributeError: 'tuple' object has no attribute 'url'
|
```py
[] Proxy Type (http/https/socks5) | Enter nothing to use without Proxy: http
Country: is not supported.!
Exception in thread Thread-1 (verify):
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Administrator\Desktop\discordphoneverify\new version\main.py", line 131, in verify
checktoken()
File "C:\Users\Administrator\Desktop\discordphoneverify\new version\main.py", line 120, in checktoken
with httpx.Client(headers=HEADERS, timeout=timeout, proxies=proxyauth if proxytype != "" else None) as client:
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\httpx_client.py", line 683, in init
self._mounts: typing.Dict[URLPattern, typing.Optional[BaseTransport]] = {
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\httpx_client.py", line 686, in <dictcomp>
else self._init_proxy_transport(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\httpx_client.py", line 740, in _init_proxy_transport
return HTTPTransport(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\httpx_transports\default.py", line 140, in init
elif proxy.url.scheme in ("http", "https"):
AttributeError: 'tuple' object has no attribute 'url'
```
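For context, the `'tuple' object has no attribute 'url'` points at the value passed as `proxies`: httpx of that era expects a proxy URL string or a scheme-to-URL mapping, and accesses `.url` on each mapping value. A data-only sketch of the shapes involved (credentials and host are hypothetical):

```python
# Shapes httpx.Client(proxies=...) accepted in the 0.23-era API
# (assumption based on the traceback, not verified against every version).
proxy_url = "http://user:pass@127.0.0.1:8080"   # hypothetical proxy

proxies_as_str = proxy_url
proxies_as_map = {"http://": proxy_url, "https://": proxy_url}

# A tuple such as ("http", "127.0.0.1", 8080) reaches the proxy transport
# unconverted, and `proxy.url.scheme` then fails exactly as in the traceback.
bad_value = ("http", "127.0.0.1", 8080)
print(type(bad_value).__name__)  # 'tuple'
```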
|
closed
|
2022-11-05T21:55:08Z
|
2022-11-12T13:50:57Z
|
https://github.com/encode/httpx/issues/2432
|
[] |
FuckingToasters
| 2
|
ClimbsRocks/auto_ml
|
scikit-learn
| 56
|
cannot train multiple instances of xgboost without it hanging
|
Probably related to various accelerators, or Python versioning.
Might need to install in a virtualenv and use Python 3.
|
closed
|
2016-08-26T15:38:57Z
|
2016-09-24T04:56:27Z
|
https://github.com/ClimbsRocks/auto_ml/issues/56
|
[] |
ClimbsRocks
| 3
|
bigscience-workshop/petals
|
nlp
| 488
|
Can't install on windows
|
Hi there, when I try to install this tool on Windows, I get this error:
```
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\aloui\AppData\Local\Temp\pip-install-2dteevmr\uvloop_a94358b69ddb46f6a5dc8651981ce698\setup.py", line 8, in <module>
raise RuntimeError('uvloop does not support Windows at the moment')
RuntimeError: uvloop does not support Windows at the moment
[end of output]
```
Is there any way to make this run on Windows?
|
closed
|
2023-08-30T12:04:03Z
|
2023-09-21T01:42:20Z
|
https://github.com/bigscience-workshop/petals/issues/488
|
[] |
ParisNeo
| 9
|
xzkostyan/clickhouse-sqlalchemy
|
sqlalchemy
| 286
|
Native driver fails with stream results
|
**Describe the bug**
Using execute() with `stream_results` enabled raises an exception.
**To Reproduce**
Test like this
```python
def test_with_stream_results(self):
rv = self.session.execute(text("SELECT * FROM system.numbers LIMIT 1"),
execution_options={"stream_results": True})
self.assertEqual(len(rv.fetchall()), 1)
```
**Expected behavior**
Test should pass.
**Versions**
- Package version: 0.3.0
- Python version: 3.10
|
closed
|
2024-01-22T16:10:04Z
|
2024-01-23T08:09:22Z
|
https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/286
|
[] |
akurdyukov
| 0
|
httpie/http-prompt
|
rest-api
| 135
|
GET request may be sent unknown params
|

|
closed
|
2017-11-29T13:56:08Z
|
2017-12-05T04:01:10Z
|
https://github.com/httpie/http-prompt/issues/135
|
[] |
ghost
| 2
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 1,460
|
#gpu_ids
|
in base_option.py the code is:
```python
if len(opt.gpu_ids) > 0:
    torch.cuda.set_device(opt.gpu_ids[0])
```
which means that however I modify gpu_ids, only one GPU can be used here, not multiple.
|
closed
|
2022-07-16T11:11:03Z
|
2022-07-22T06:26:03Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1460
|
[] |
JerryHao-art
| 0
|
python-gino/gino
|
sqlalchemy
| 1
|
Setup unit tests
|
[asyncpgsa](https://github.com/CanopyTax/asyncpgsa) uses these libraries for test:
```
pytest
pytest-asyncio
pytest-capturelog>=0.7
```
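A minimal sketch of what a test using those libraries might look like (the coroutine under test is a stub; pytest-asyncio would normally supply the event loop via its marker, and `asyncio.run` keeps the sketch self-contained):

```python
import asyncio

async def fetch_one():
    # stub standing in for a gino/asyncpgsa query
    await asyncio.sleep(0)
    return 1

def test_fetch_one():
    # with pytest-asyncio this would be `async def` plus @pytest.mark.asyncio;
    # asyncio.run is used here so the sketch runs without pytest installed
    assert asyncio.run(fetch_one()) == 1

test_fetch_one()
```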
|
closed
|
2017-07-22T00:49:06Z
|
2017-07-24T03:15:01Z
|
https://github.com/python-gino/gino/issues/1
|
[
"help wanted",
"task"
] |
fantix
| 0
|
ivy-llc/ivy
|
pytorch
| 28,062
|
Fix Frontend Failing Test: paddle - miscellaneous.numpy.copysign
|
To-do List: https://github.com/unifyai/ivy/issues/27500
|
closed
|
2024-01-27T01:01:01Z
|
2024-01-30T17:58:02Z
|
https://github.com/ivy-llc/ivy/issues/28062
|
[
"Sub Task"
] |
Sai-Suraj-27
| 0
|
openapi-generators/openapi-python-client
|
rest-api
| 621
|
Support Client Side Certificates
|
It would be nice if the generated client would support client side certificates.
This would be an additional authentication method used in secured environments.
The underlying httpx lib does support it with the named argument "cert":
https://www.python-httpx.org/advanced/#client-side-certificates
I was not able to get the kwargs from the openapi-python-client passed through to httpx.
|
closed
|
2022-06-01T15:01:51Z
|
2022-11-12T19:03:25Z
|
https://github.com/openapi-generators/openapi-python-client/issues/621
|
[
"✨ enhancement"
] |
marioland
| 2
|
AirtestProject/Airtest
|
automation
| 916
|
api test by amao
|
test by amao
|
closed
|
2021-06-14T11:58:50Z
|
2021-06-14T12:47:29Z
|
https://github.com/AirtestProject/Airtest/issues/916
|
[] |
NoneTypeCoder
| 0
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 504
|
How do I apply Timm Encoders?
|
I'm trying to use xception65 with deeplabv3+.
Can't quite figure out how to use one.
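For what it's worth, segmentation_models.pytorch exposes timm encoders through a `tu-` name prefix (an assumption from smp's timm integration; availability of xception65 specifically is not confirmed), so the call would look roughly like the commented sketch below:

```python
# Hedged sketch: a "tu-" prefix routes encoder lookup through timm in smp
# (assumption); the weights and class count below are placeholders.
encoder_name = "tu-xception65"

# import segmentation_models_pytorch as smp   # requires smp + timm installed
# model = smp.DeepLabV3Plus(
#     encoder_name=encoder_name,
#     encoder_weights=None,  # timm encoders may lack smp-pretrained weights
#     in_channels=3,
#     classes=2,             # hypothetical class count
# )
print(encoder_name)
```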
|
closed
|
2021-10-29T17:41:39Z
|
2022-01-30T13:17:47Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/504
|
[] |
hellojun12
| 6
|