| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
tensorly/tensorly
|
numpy
| 99
|
Non-Negative Decompositions not available
|
I was looking at the docs and saw there are a couple of non-negative decompositions, but when I installed tensorly I could not find them. In fact, the installed package looks quite different from what the docs describe.
|
closed
|
2019-02-25T15:54:08Z
|
2019-02-25T16:37:03Z
|
https://github.com/tensorly/tensorly/issues/99
|
[] |
temuller
| 1
|
graphql-python/gql
|
graphql
| 463
|
[Feature] Support Subscriptions HTTP Multipart Protocol
|
Apollo recently introduced [HTTP callback protocol for GraphQL subscriptions](https://www.apollographql.com/docs/router/executing-operations/subscription-callback-protocol/) which utilizes the HTTP Multipart protocol instead of websockets.
Apollo Client [supports it out of the box](https://www.apollographql.com/docs/react/data/subscriptions#http) and provides adapters for Relay and urql.
Would be awesome if this library supported it!
Some reference code from Apollo Client:
- Relay Adapter: https://github.com/apollographql/apollo-client/blob/26fe4a57323f76ba73b6a2254c447aa967daf0f4/src/utilities/subscriptions/relay/index.ts#L17
- urql Adapter: https://github.com/apollographql/apollo-client/blob/26fe4a57323f76ba73b6a2254c447aa967daf0f4/src/utilities/subscriptions/urql/index.ts#L14
- Code for parsing HTTP Multipart: https://github.com/apollographql/apollo-client/blob/26fe4a57323f76ba73b6a2254c447aa967daf0f4/src/link/http/parseAndCheckHttpResponse.ts#L16
- Adding multi-part headers: https://github.com/apollographql/apollo-client/blob/26fe4a57323f76ba73b6a2254c447aa967daf0f4/src/link/http/createHttpLink.ts#L147
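For anyone prototyping this, the multipart framing can be sketched in plain Python. `parse_multipart_mixed` and the `graphql` boundary below are hypothetical stand-ins, not gql API; real servers also interleave heartbeat parts:

```python
import json


def parse_multipart_mixed(body: bytes, boundary: str):
    """Split a multipart/mixed payload into its JSON parts.

    Hypothetical helper, not gql API; a real implementation would parse the
    stream incrementally as chunks arrive rather than splitting a full body.
    """
    delimiter = ("--" + boundary).encode()
    parts = []
    for chunk in body.split(delimiter):
        chunk = chunk.strip()
        if not chunk or chunk == b"--":  # preamble or closing marker
            continue
        # part headers and payload are separated by a blank line
        _headers, _, payload = chunk.partition(b"\r\n\r\n")
        if payload:
            parts.append(json.loads(payload))
    return parts


raw = (
    b"--graphql\r\ncontent-type: application/json\r\n\r\n"
    b'{"payload": {"data": {"tick": 1}}}\r\n'
    b"--graphql\r\ncontent-type: application/json\r\n\r\n"
    b'{"payload": {"data": {"tick": 2}}}\r\n'
    b"--graphql--"
)
print(parse_multipart_mixed(raw, "graphql"))
```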
|
open
|
2024-02-05T18:02:37Z
|
2024-02-06T22:52:37Z
|
https://github.com/graphql-python/gql/issues/463
|
[
"type: feature"
] |
andrewmcgivery
| 6
|
pallets-eco/flask-sqlalchemy
|
sqlalchemy
| 688
|
Add flake8 and/or other style helpers
|
See https://github.com/pallets/werkzeug/blob/master/.pre-commit-config.yaml for reference.
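For reference, a minimal sketch of what such a config might look like, modeled loosely on the Werkzeug file linked above (the `rev` pins are illustrative, not recommendations):

```yaml
# hypothetical minimal .pre-commit-config.yaml
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 6.1.0
    hooks:
      - id: flake8
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
```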
|
closed
|
2019-03-08T23:56:35Z
|
2022-09-18T18:10:44Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/688
|
[] |
rsyring
| 2
|
graphistry/pygraphistry
|
jupyter
| 367
|
[BUG] frozen ai deps for dirty-cat ci failures
|
Commit https://github.com/graphistry/pygraphistry/commit/e1be5e46d25d6224ed80f1b5c5012470c9b08212 requires, for umap/ai installs: Python 3.8+ (vs 3.6) and scikit-learn 1.0+ (vs 0.2x), and freezes dirty-cat to 0.2.0 (disallowing 0.2.1)
This works around the CI failure `FAILED graphistry/tests/test_umap_utils.py::TestUMAPFitTransform::test_allclose_fit_transform_on_same_data`
This is a problem in a few ways:
- we cannot use the latest dirty-cat
- scikit-learn 1.0 / Python 3.8 are relatively new, so they are annoying as required deps
The next step may be to file a minimal reproducible example on the dirty-cat repo:
Ex: show something that works in the above setup but fails with dirty-cat 0.2.1
Ex: show something that works in the above setup but fails with dirty-cat 0.2.0 and scikit-learn 0.2x (which they aim to support, afaict)
|
open
|
2022-06-23T05:38:18Z
|
2022-06-23T05:39:39Z
|
https://github.com/graphistry/pygraphistry/issues/367
|
[
"bug"
] |
lmeyerov
| 0
|
gradio-app/gradio
|
data-visualization
| 10,347
|
[ImageEditor] - tracking issue
|
The current ImageEditor work is being tackled in the following way:
- [x] **Sizing redesign**
A number of bugs and feature requests actually relate to sizing. We have several different notions of what sizing means, which causes both confusion and bugs. We think of sizing in terms of container size, natural image/canvas size, and display (UI) canvas/image size.
If we implement zooming (below), we can restrict the sizing redesign to container + image size, ignoring UI size.
Should resolve the following:
- #7685.
- #8556.
- #8667.
- #9305.
- #9768.
- #9889.
- #10255.
- #10265.
- [ ] **Implement zooming + panning** (which will address visible canvas size)
- #6667
- [ ] **Resize the canvas and / or the background image** (discuss on monday) - use-case: outpainting
- #6508
- #6506
- [ ] **Allow app authors to set constraints on layer numbers / names** (needs API design) use-case: models with fixed requirements, e.g. base image + mask
- #6505
- [ ] **Allow authors/ users to set opacity** (discuss on monday - what does opacity mean?)
- #6544
- [ ] #6740
- [ ] #8071
- [x] #8858
- [x] #9978
- [ ] Reproduce with latest; may be fixed - #10178
- [ ] Reproduce with latest; may be fixed - #10248
- [ ] Low prio but simple - #7586
- [ ] Testing
---
- [ ] **More layer controls.** UI only, logic already exists. Needs use-case.
- #6504
- [ ] Client issue not image editor - #9467
- [ ] **Custom buttons with associated functions** - normal generation but integrated (button in component, added to history, etc) -- discuss
|
open
|
2025-01-13T18:08:16Z
|
2025-01-24T01:26:45Z
|
https://github.com/gradio-app/gradio/issues/10347
|
[
"tracking",
"🖼️ ImageEditor"
] |
pngwn
| 5
|
ultralytics/yolov5
|
pytorch
| 13,176
|
Training with HIP/ROCm
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar feature requests.
### Description
As PyTorch 2.3 seems to support ROCm: https://pytorch.org/docs/stable/notes/hip.html
With minimal code changes (only TF32 support is missing, and ROCm reuses the same CUDA device API as before), there should be a way to set up ultralytics to work with an AMD GPU on Linux.
### Use case
AMD GPUs are 2-3x cheaper and have 2-3x more VRAM than their Nvidia counterparts; the only thing holding them back is ROCm support in ultralytics.
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
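The point about minimal changes can be sketched in a few lines, assuming only that ROCm builds of PyTorch expose the AMD GPU through the usual `torch.cuda` API and set `torch.version.hip` (`pick_device` is a hypothetical helper, not ultralytics code):

```python
def pick_device():
    """Pick a training device; works for both CUDA and ROCm builds of PyTorch.

    Under ROCm, HIP reuses the torch.cuda API, so "cuda:0" addresses the AMD
    GPU; torch.version.hip is set only on ROCm builds.
    """
    try:
        import torch
    except ImportError:
        return "cpu"  # no PyTorch installed; fall back for illustration
    if torch.cuda.is_available():
        backend = "rocm" if getattr(torch.version, "hip", None) else "cuda"
        return f"cuda:0 ({backend})"
    return "cpu"


print(pick_device())
```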
|
open
|
2024-07-08T21:51:50Z
|
2024-10-20T19:49:45Z
|
https://github.com/ultralytics/yolov5/issues/13176
|
[
"enhancement",
"Stale"
] |
JijaProGamer
| 3
|
predict-idlab/plotly-resampler
|
data-visualization
| 341
|
Dash Callback says FigureResampler is not JSON serializable
|
Apologies, this is more of a "this broke and I don't know what went wrong" type of issue. So far it looks like everything in the Dash dashboard I've made works except the plotting. This is the exception I get:
```
dash.exceptions.InvalidCallbackReturnValue: The callback for `[<Output `data-plot.figure`>, <Output `store.data`>, <Output `status-msg.children`>]`
returned a value having type `FigureResampler`
which is not JSON serializable.
The value in question is either the only value returned,
or is in the top level of the returned list,
and has string representation
`FigureResampler({ 'data': [{'mode': 'lines',
'name': '<b style="color:sandybrown">[R]</b> Category1 <i style="color:#fc9944">~10s</i>',
'type': 'scatter',
'uid': 'c78a3bb2-658c-44d0-b791-dfc0bbe76cd8',
'x': array([datetime.datetime(2025, 1, 22, 18, 38, 21),...
```
The relevant code chunks that could cause this break are:
```
from dash import Dash, html, dcc, Output, Input, State, callback, no_update, ctx
from dash_extensions.enrich import DashProxy, ServersideOutputTransform, Serverside
import dash_bootstrap_components as dbc
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from plotly_resampler import FigureResampler
app = DashProxy(
__name__,
external_stylesheets=[dbc.themes.LUX],
transforms=[ServersideOutputTransform()],
)
# assume app creation within a dbc. container here
dcc.Graph(id="data-plot", figure=go.Figure())
# this is the callback for the function triggering the break:
@callback(
[Output("data-plot", "figure"),
Output("store", "data"), # Cache the figure data
Output("status-msg", "children")],
[Input("load-btn", "n_clicks"),
State("dropdown-1", "value"),
State("dropdown-2", "value"),
State("dropdown-3", "value"),
State("dropdown-4", "value"),
State("dropdown-5", "value")],
prevent_initial_call=True # Prevents callback from running at startup
)
# this is how I made the figure; assume it sits right below the callback above
fig = FigureResampler(go.Figure(), default_n_shown_samples=10000)
# traces added and layout updated here
# the callback then returns: fig, Serverside(fig), "this thing works"
# I also use this function to update the resampling
@app.callback(
Output("data-plot", "figure", allow_duplicate=True),
Input("data-plot", "relayoutData"),
State("store", "data"), # The server side cached FigureResampler per session
prevent_initial_call=True,
)
def update_fig(relayoutdata: dict, fig: FigureResampler):
if fig is None:
return no_update
return fig.construct_update_data_patch(relayoutdata)
```
From the docs it looks like you can return plotly-resampler figures as outputs for a dcc.Graph. What could have gone wrong?
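For context on why `Serverside` matters at all, the pattern `ServersideOutputTransform` is meant to implement can be mocked in plain Python: keep the non-serializable object in a server-side store and hand the client only a JSON-safe token. `serverside`, `CACHE`, and `NotSerializable` below are illustrative stand-ins, not dash-extensions API:

```python
import json
import uuid

CACHE = {}  # server-side store keyed by a JSON-safe token


class NotSerializable:
    """Stand-in for a FigureResampler-like object that json.dumps rejects."""

    def __init__(self, samples):
        self.samples = samples


def serverside(obj):
    """Store obj server-side and return only a serializable token."""
    token = str(uuid.uuid4())
    CACHE[token] = obj
    return token


fig = NotSerializable(samples=10000)
token = serverside(fig)

json.dumps(token)            # the token serializes fine
assert CACHE[token] is fig   # the real object never leaves the server
```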
|
closed
|
2025-03-05T20:37:21Z
|
2025-03-06T18:06:15Z
|
https://github.com/predict-idlab/plotly-resampler/issues/341
|
[] |
FDSRashid
| 1
|
ymcui/Chinese-BERT-wwm
|
tensorflow
| 25
|
Model conversion fails; please provide a PyTorch version
|
Possibly due to a mistake in my procedure, I used pytorch-transformers to convert the model, but after converting the TF checkpoint to a PyTorch model, the model produces no useful results and behaves essentially at random. I hope you can provide a pre-converted version. Thank you!
The conversion command was:
```
sudo python3 convert.py --tf_checkpoint_path ./bert_model.ckpt.index --bert_config_file bert_config.json --pytorch_dump_path ./pytorch_bert.bin
```
where the code for convert.py is:
[convert_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/convert_tf_checkpoint_to_pytorch.py)
|
closed
|
2019-08-01T03:22:11Z
|
2019-08-01T10:57:36Z
|
https://github.com/ymcui/Chinese-BERT-wwm/issues/25
|
[] |
braveryCHR
| 3
|
ray-project/ray
|
pytorch
| 51,071
|
[core] Only one of the threads in a thread pool will be initialized as a long-running Python thread
|
### What happened + What you expected to happen
Currently, only one of the threads in a thread pool is initialized as a long-running Python thread. I should also investigate whether it's possible to call `PyGILState_Release` on a different thread than the one that called `PyGILState_Ensure` in the thread pool.
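The per-thread-initialization concern can be illustrated in pure Python (an analogy, not Ray internals): each pool worker is a distinct OS thread, so state set up in only one worker is invisible to the others; `ThreadPoolExecutor`'s `initializer` runs once per worker instead:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

state = threading.local()  # per-thread storage


def init_thread():
    # Runs once in *each* worker thread; if only one thread were initialized
    # (the situation described above), the other workers would miss this state.
    state.ready = True


def task(_):
    return getattr(state, "ready", False)


with ThreadPoolExecutor(max_workers=4, initializer=init_thread) as pool:
    results = list(pool.map(task, range(8)))

print(all(results))  # every worker thread saw its own initialization
```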
### Versions / Dependencies
TODO
### Reproduction script
TODO
### Issue Severity
None
|
open
|
2025-03-04T22:03:14Z
|
2025-03-04T23:02:20Z
|
https://github.com/ray-project/ray/issues/51071
|
[
"bug",
"core"
] |
kevin85421
| 0
|
flaskbb/flaskbb
|
flask
| 601
|
Attachments
|
Add support for uploading files and pictures
|
open
|
2021-09-10T18:20:59Z
|
2021-09-10T18:20:59Z
|
https://github.com/flaskbb/flaskbb/issues/601
|
[] |
sh4nks
| 0
|
comfyanonymous/ComfyUI
|
pytorch
| 6,511
|
SamplerCustomAdvanced: The size of tensor a (3072) must match the size of tensor b (4096) at non-singleton dimension 2
|
### Your question
**I haven't been able to find any solution so far. I got this error when trying to use the following workflow: https://civitai.com/models/929131/flux-pulid-face-swap-inpainting-consistent-character-workflow**
# ComfyUI Error Report
## Error Details
- **Node ID:** 48
- **Node Type:** SamplerCustomAdvanced
- **Exception Type:** RuntimeError
- **Exception Message:** The size of tensor a (3072) must match the size of tensor b (4096) at non-singleton dimension 2
## Stack Trace
```
File "D:\IAs\pinokio\api\comfy.git\app\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\IAs\pinokio\api\comfy.git\app\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\IAs\pinokio\api\comfy.git\app\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\IAs\pinokio\api\comfy.git\app\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "D:\IAs\pinokio\api\comfy.git\app\comfy_extras\nodes_custom_sampler.py", line 633, in sample
samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 985, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 111, in execute
return self.wrappers[self.idx](self, *args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_pulid_flux_ll\pulidflux.py", line 380, in pulid_outer_sample_wrappers_with_override
out = wrapper_executor(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 103, in __call__
return new_executor.execute(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 953, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 936, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 715, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "D:\IAs\pinokio\api\comfy.git\app\env\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\k_diffusion\sampling.py", line 162, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 380, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 916, in __call__
return self.predict_noise(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 919, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 360, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 196, in calc_cond_batch
return executor.execute(model, conds, x_in, timestep, model_options)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 309, in _calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\model_base.py", line 131, in apply_model
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 111, in execute
return self.wrappers[self.idx](self, *args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_pulid_flux_ll\pulidflux.py", line 404, in pulid_apply_model_wrappers
out = wrapper_executor(x, t, c_concat, c_crossattn, control, transformer_options, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 103, in __call__
return new_executor.execute(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\model_base.py", line 160, in _apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "D:\IAs\pinokio\api\comfy.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\ldm\flux\model.py", line 204, in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options, attn_mask=kwargs.get("attention_mask", None))
File "D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_pulid_flux_ll\PulidFluxHook.py", line 189, in pulid_forward_orig
out = blocks_replace[("double_block", i)]({"img": img,
File "D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_pulid_flux_ll\PulidFluxHook.py", line 74, in __call__
img = img + callback(temp_img,
File "D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_pulid_flux_ll\PulidFluxHook.py", line 47, in pulid_patch
pulid_img = pulid_img * mask
```
## System Information
- **ComfyUI Version:** 0.3.10
- **Arguments:** main.py
- **OS:** nt
- **Python Version:** 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:07:43) [MSC v.1942 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.5.1+cu121
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 12884246528
- **VRAM Free:** 1591319168
- **Torch VRAM Total:** 9160359936
- **Torch VRAM Free:** 69835392
## Logs
```
2025-01-18T10:15:25.820057 - ** Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:07:43) [MSC v.1942 64 bit (AMD64)]
2025-01-18T10:15:25.821060 - ** Python executable: D:\IAs\pinokio\api\comfy.git\app\env\Scripts\python.exe
2025-01-18T10:15:25.822069 - ** ComfyUI Path: D:\IAs\pinokio\api\comfy.git\app
2025-01-18T10:15:25.822553 - ** User directory: D:\IAs\pinokio\api\comfy.git\app\user
2025-01-18T10:15:25.822553 - ** ComfyUI-Manager config path: D:\IAs\pinokio\api\comfy.git\app\user\default\ComfyUI-Manager\config.ini
2025-01-18T10:15:25.822553 - ** Log path: D:\IAs\pinokio\api\comfy.git\app\user\comfyui.log
2025-01-18T10:15:37.664394 -
Prestartup times for custom nodes:
2025-01-18T10:15:37.664394 - 0.0 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\rgthree-comfy
2025-01-18T10:15:37.664394 - 47.6 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-Manager
2025-01-18T10:15:37.664394 -
2025-01-18T10:15:59.349756 - Total VRAM 12287 MB, total RAM 16354 MB
2025-01-18T10:15:59.350752 - pytorch version: 2.5.1+cu121
2025-01-18T10:15:59.351756 - Set vram state to: NORMAL_VRAM
2025-01-18T10:15:59.352751 - Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
2025-01-18T10:16:10.086733 - Using pytorch attention
2025-01-18T10:16:25.542982 - [Prompt Server] web root: D:\IAs\pinokio\api\comfy.git\app\web
2025-01-18T10:16:54.698536 - Total VRAM 12287 MB, total RAM 16354 MB
2025-01-18T10:16:54.698536 - pytorch version: 2.5.1+cu121
2025-01-18T10:16:54.700539 - Set vram state to: NORMAL_VRAM
2025-01-18T10:16:54.700539 - Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
2025-01-18T10:16:54.990056 - ### Loading: ComfyUI-Manager (V3.7.4)
2025-01-18T10:16:55.337528 - ### ComfyUI Version: v0.3.10-53-g3aaabb1 | Released on '2025-01-14'
2025-01-18T10:16:56.108602 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-01-18T10:16:56.174705 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-01-18T10:16:56.208702 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-01-18T10:16:56.358871 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-01-18T10:16:56.500767 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-01-18T10:16:56.503847 - FETCH DATA from: https://api.comfy.org/nodes?page=1&limit=1000
2025-01-18T10:17:01.040195 - D:\IAs\pinokio\api\comfy.git\app\env\lib\site-packages\albumentations\__init__.py:13: UserWarning: A new version of Albumentations is available: 2.0.0 (you have 1.4.15). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.
check_for_updates()
2025-01-18T10:17:06.733037 - [DONE]
2025-01-18T10:17:07.635148 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes?page=1&limit=1000
2025-01-18T10:17:07.690995 - nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/cache
2025-01-18T10:17:07.691997 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-01-18T10:17:07.760287 - [DONE]
2025-01-18T10:17:10.531461 - Please 'pip install xformers'
2025-01-18T10:17:10.535665 - Nvidia APEX normalization not installed, using PyTorch LayerNorm
2025-01-18T10:17:12.166283 - D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-segment-anything-2\sam2\modeling\sam\transformer.py:20: UserWarning: Flash Attention is disabled as it requires a GPU with Ampere (8.0) CUDA capability.
OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings()
2025-01-18T10:17:15.914860 - D:\IAs\pinokio\api\comfy.git\app
2025-01-18T10:17:15.915872 - ############################################
2025-01-18T10:17:15.915872 - D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-NAI-styler\CSV
2025-01-18T10:17:15.915872 - ############################################
2025-01-18T10:17:15.916870 - []
2025-01-18T10:17:15.916870 - ############################################
2025-01-18T10:17:16.647792 - ------------------------------------------
2025-01-18T10:17:16.648805 - Comfyroll Studio v1.76 : 175 Nodes Loaded
2025-01-18T10:17:16.649883 - ------------------------------------------
2025-01-18T10:17:16.649883 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
2025-01-18T10:17:16.650893 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
2025-01-18T10:17:16.650893 - ------------------------------------------
2025-01-18T10:17:16.801369 - [comfyui_controlnet_aux] | INFO -> Using ckpts path: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_controlnet_aux\ckpts
2025-01-18T10:17:16.802382 - [comfyui_controlnet_aux] | INFO -> Using symlinks: False
2025-01-18T10:17:16.802382 - [comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
2025-01-18T10:17:17.314833 - DWPose: Onnxruntime with acceleration providers detected
2025-01-18T10:17:18.111459 - # 😺dzNodes: LayerStyle -> Cannot import name 'guidedFilter' from 'cv2.ximgproc'
A few nodes cannot works properly, while most nodes are not affected. Please REINSTALL package 'opencv-contrib-python'.
For detail refer to https://github.com/chflame163/ComfyUI_LayerStyle/issues/5
2025-01-18T10:17:21.359019 - D:\IAs\pinokio\api\comfy.git\app
2025-01-18T10:17:21.360030 - ############################################
2025-01-18T10:17:21.361014 - D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-NAI-styler\CSV
2025-01-18T10:17:21.361014 - ############################################
2025-01-18T10:17:21.362014 - []
2025-01-18T10:17:21.362014 - ############################################
2025-01-18T10:17:21.545250 - Please 'pip install xformers'
2025-01-18T10:17:21.552323 - Nvidia APEX normalization not installed, using PyTorch LayerNorm
2025-01-18T10:17:22.404586 - [rgthree-comfy] Loaded 42 fantastic nodes. 🎉
Import times for custom nodes:
2025-01-18T10:17:22.423910 - 0.0 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\websocket_image_save.py
2025-01-18T10:17:22.424918 - 0.0 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\cg-use-everywhere
2025-01-18T10:17:22.424918 - 0.1 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI_NAI-mod
2025-01-18T10:17:22.425920 - 0.1 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-Universal-Styler
2025-01-18T10:17:22.426920 - 0.1 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI_bitsandbytes_NF4-Lora
2025-01-18T10:17:22.426920 - 0.1 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-GGUF
2025-01-18T10:17:22.427920 - 0.1 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfy-image-saver
2025-01-18T10:17:22.427920 - 0.3 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui-florence2
2025-01-18T10:17:22.428920 - 0.3 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui-custom-scripts
2025-01-18T10:17:22.428920 - 0.3 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui-kjnodes
2025-01-18T10:17:22.429921 - 0.4 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\rgthree-comfy
2025-01-18T10:17:22.429921 - 0.6 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_pulid_flux_ll
2025-01-18T10:17:22.429921 - 0.7 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2025-01-18T10:17:22.430921 - 0.9 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_controlnet_aux
2025-01-18T10:17:22.430921 - 0.9 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-Manager
2025-01-18T10:17:22.430921 - 1.3 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-segment-anything-2
2025-01-18T10:17:22.432045 - 1.8 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-FluxTrainer
2025-01-18T10:17:22.432045 - 2.5 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui-tensorops
2025-01-18T10:17:22.432045 - 3.8 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_layerstyle
2025-01-18T10:17:22.433045 - 5.8 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui-advancedliveportrait
2025-01-18T10:17:22.433045 - 11.9 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-eesahesNodes
2025-01-18T10:17:22.433045 - 16.1 seconds: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-PuLID-Flux-Enhanced
2025-01-18T10:17:22.433045 -
2025-01-18T10:17:22.475782 - Starting server
2025-01-18T10:17:22.476782 - To see the GUI go to: http://127.0.0.1:8188
2025-01-18T10:18:07.802704 - FETCH DATA from: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-Manager\extension-node-map.json
2025-01-18T10:18:07.857730 - [DONE]
2025-01-18T10:18:08.424136 - Error. No naistyles.csv found. Put your naistyles.csv in the custom_nodes/ComfyUI_NAI-mod/CSV directory of ComfyUI. Then press "Refresh".
Your current root directory is: D:\IAs\pinokio\api\comfy.git\app
2025-01-18T10:18:24.917482 - got prompt
2025-01-18T10:18:25.661916 - Using pytorch attention in VAE
2025-01-18T10:18:25.664027 - Using pytorch attention in VAE
2025-01-18T10:18:36.106181 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-01-18T10:18:36.201755 - Florence2 using sdpa for attention
2025-01-18T10:18:37.039675 - No flash_attn import to remove
2025-01-18T10:18:39.871680 - Florence2LanguageForConditionalGeneration has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
2025-01-18T10:23:12.496603 - </s><s><s><s> face<loc_378><loc_89><loc_530><loc_284></s>
2025-01-18T10:23:12.554196 - match index: 0 in mask_indexes: ['0']
2025-01-18T10:23:13.380452 - Offloading model...
2025-01-18T10:23:15.093567 - [[[387.5840148925781, 91.64800262451172, 543.2320556640625, 291.3280029296875]]]
2025-01-18T10:23:15.095078 - Type of data: <class 'list'>
2025-01-18T10:23:15.095802 - Data: [[[387.5840148925781, 91.64800262451172, 543.2320556640625, 291.3280029296875]]]
2025-01-18T10:23:15.096818 - Indexes: [0]
2025-01-18T10:23:15.097820 - Coordinates: [{"x": 465, "y": 191}]
2025-01-18T10:23:15.101232 - model_path: D:\IAs\pinokio\api\comfy.git\app\models\sam2\sam2.1_hiera_large-fp16.safetensors
2025-01-18T10:23:15.102234 - Using model config: D:\IAs\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-segment-anything-2\sam2_configs\sam2.1_hiera_l.yaml
2025-01-18T10:23:53.226968 -
Processing Images: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.73s/it]2025-01-18T10:23:53.672095 -
Processing Images: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:03<00:00, 3.18s/it]2025-01-18T10:23:53.673096 -
2025-01-18T10:23:53.992905 - Requested to load AutoencodingEngine
2025-01-18T10:23:54.109295 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-18T10:24:00.671317 - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2025-01-18T10:24:00.673318 - model_type FLUX
2025-01-18T10:33:41.971369 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-01-18T10:33:54.440599 - clip missing: ['text_projection.weight']
2025-01-18T10:33:54.738243 - Requested to load FluxClipModel_
2025-01-18T10:38:02.149395 - loaded completely 9.5367431640625e+25 4778.66259765625 True
2025-01-18T10:38:15.324053 - Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'device_id': '0', 'has_user_compute_stream': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'user_compute_stream': '0', 'gpu_external_alloc': '0', 'gpu_mem_limit': '18446744073709551615', 'enable_cuda_graph': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'do_copy_in_default_stream': '1', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'tunable_op_tuning_enable': '0', 'tunable_op_max_tuning_duration_ms': '0', 'enable_skip_layer_norm_strict_mode': '0', 'prefer_nhwc': '0', 'use_ep_level_unified_stream': '0', 'use_tf32': '1', 'sdpa_kernel': '0'}, 'CPUExecutionProvider': {}}2025-01-18T10:38:15.326249 -
2025-01-18T10:38:19.240152 - find model:2025-01-18T10:38:19.240152 - 2025-01-18T10:38:19.240152 - D:\IAs\pinokio\api\comfy.git\app\models\insightface\models\antelopev2\1k3d68.onnx2025-01-18T10:38:19.240152 - 2025-01-18T10:38:19.241153 - landmark_3d_682025-01-18T10:38:19.241153 - 2025-01-18T10:38:19.241153 - ['None', 3, 192, 192]2025-01-18T10:38:19.241153 - 2025-01-18T10:38:19.241153 - 0.02025-01-18T10:38:19.241153 - 2025-01-18T10:38:19.241153 - 1.02025-01-18T10:38:19.242154 -
2025-01-18T10:38:19.457746 - Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'device_id': '0', 'has_user_compute_stream': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'user_compute_stream': '0', 'gpu_external_alloc': '0', 'gpu_mem_limit': '18446744073709551615', 'enable_cuda_graph': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'do_copy_in_default_stream': '1', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'tunable_op_tuning_enable': '0', 'tunable_op_max_tuning_duration_ms': '0', 'enable_skip_layer_norm_strict_mode': '0', 'prefer_nhwc': '0', 'use_ep_level_unified_stream': '0', 'use_tf32': '1', 'sdpa_kernel': '0'}, 'CPUExecutionProvider': {}}2025-01-18T10:38:19.458746 -
2025-01-18T10:38:19.474749 - find model:2025-01-18T10:38:19.474749 - 2025-01-18T10:38:19.474749 - D:\IAs\pinokio\api\comfy.git\app\models\insightface\models\antelopev2\2d106det.onnx2025-01-18T10:38:19.474749 - 2025-01-18T10:38:19.474749 - landmark_2d_1062025-01-18T10:38:19.476114 - 2025-01-18T10:38:19.476114 - ['None', 3, 192, 192]2025-01-18T10:38:19.476114 - 2025-01-18T10:38:19.476114 - 0.02025-01-18T10:38:19.476746 - 2025-01-18T10:38:19.476746 - 1.02025-01-18T10:38:19.476746 -
2025-01-18T10:38:19.750649 - Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'device_id': '0', 'has_user_compute_stream': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'user_compute_stream': '0', 'gpu_external_alloc': '0', 'gpu_mem_limit': '18446744073709551615', 'enable_cuda_graph': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'do_copy_in_default_stream': '1', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'tunable_op_tuning_enable': '0', 'tunable_op_max_tuning_duration_ms': '0', 'enable_skip_layer_norm_strict_mode': '0', 'prefer_nhwc': '0', 'use_ep_level_unified_stream': '0', 'use_tf32': '1', 'sdpa_kernel': '0'}, 'CPUExecutionProvider': {}}2025-01-18T10:38:19.751660 -
2025-01-18T10:38:19.759887 - find model:2025-01-18T10:38:19.759887 - 2025-01-18T10:38:19.759887 - D:\IAs\pinokio\api\comfy.git\app\models\insightface\models\antelopev2\genderage.onnx2025-01-18T10:38:19.759887 - 2025-01-18T10:38:19.759887 - genderage2025-01-18T10:38:19.759887 - 2025-01-18T10:38:19.760897 - ['None', 3, 96, 96]2025-01-18T10:38:19.760897 - 2025-01-18T10:38:19.760897 - 0.02025-01-18T10:38:19.760897 - 2025-01-18T10:38:19.760897 - 1.02025-01-18T10:38:19.760897 -
2025-01-18T10:38:24.583608 - Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'device_id': '0', 'has_user_compute_stream': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'user_compute_stream': '0', 'gpu_external_alloc': '0', 'gpu_mem_limit': '18446744073709551615', 'enable_cuda_graph': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'do_copy_in_default_stream': '1', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'tunable_op_tuning_enable': '0', 'tunable_op_max_tuning_duration_ms': '0', 'enable_skip_layer_norm_strict_mode': '0', 'prefer_nhwc': '0', 'use_ep_level_unified_stream': '0', 'use_tf32': '1', 'sdpa_kernel': '0'}, 'CPUExecutionProvider': {}}2025-01-18T10:38:24.585603 -
2025-01-18T10:38:29.542434 - find model:2025-01-18T10:38:29.542434 - 2025-01-18T10:38:29.542434 - D:\IAs\pinokio\api\comfy.git\app\models\insightface\models\antelopev2\glintr100.onnx2025-01-18T10:38:29.542434 - 2025-01-18T10:38:29.542434 - recognition2025-01-18T10:38:29.543435 - 2025-01-18T10:38:29.543435 - ['None', 3, 112, 112]2025-01-18T10:38:29.543435 - 2025-01-18T10:38:29.543435 - 127.52025-01-18T10:38:29.543435 - 2025-01-18T10:38:29.543435 - 127.52025-01-18T10:38:29.543435 -
2025-01-18T10:38:30.209028 - Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'device_id': '0', 'has_user_compute_stream': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'user_compute_stream': '0', 'gpu_external_alloc': '0', 'gpu_mem_limit': '18446744073709551615', 'enable_cuda_graph': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'do_copy_in_default_stream': '1', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'tunable_op_tuning_enable': '0', 'tunable_op_max_tuning_duration_ms': '0', 'enable_skip_layer_norm_strict_mode': '0', 'prefer_nhwc': '0', 'use_ep_level_unified_stream': '0', 'use_tf32': '1', 'sdpa_kernel': '0'}, 'CPUExecutionProvider': {}}2025-01-18T10:38:30.209028 -
2025-01-18T10:38:30.216036 - find model:2025-01-18T10:38:30.216036 - 2025-01-18T10:38:30.216036 - D:\IAs\pinokio\api\comfy.git\app\models\insightface\models\antelopev2\scrfd_10g_bnkps.onnx2025-01-18T10:38:30.216036 - 2025-01-18T10:38:30.216036 - detection2025-01-18T10:38:30.216036 - 2025-01-18T10:38:30.216036 - [1, 3, '?', '?']2025-01-18T10:38:30.216036 - 2025-01-18T10:38:30.216036 - 127.52025-01-18T10:38:30.217036 - 2025-01-18T10:38:30.217036 - 128.02025-01-18T10:38:30.217036 -
2025-01-18T10:38:30.218058 - set det-size:2025-01-18T10:38:30.219037 - 2025-01-18T10:38:30.219037 - (640, 640)2025-01-18T10:38:30.219037 -
2025-01-18T10:38:30.234558 - Loaded EVA02-CLIP-L-14-336 model config.
2025-01-18T10:38:31.893970 - Shape of rope freq: torch.Size([576, 64])
2025-01-18T10:38:49.616667 - Loading pretrained EVA02-CLIP-L-14-336 weights (eva_clip).
2025-01-18T10:39:16.738118 - incompatible_keys.missing_keys: ['visual.rope.freqs_cos', 'visual.rope.freqs_sin', 'visual.blocks.0.attn.rope.freqs_cos', 'visual.blocks.0.attn.rope.freqs_sin', 'visual.blocks.1.attn.rope.freqs_cos', 'visual.blocks.1.attn.rope.freqs_sin', 'visual.blocks.2.attn.rope.freqs_cos', 'visual.blocks.2.attn.rope.freqs_sin', 'visual.blocks.3.attn.rope.freqs_cos', 'visual.blocks.3.attn.rope.freqs_sin', 'visual.blocks.4.attn.rope.freqs_cos', 'visual.blocks.4.attn.rope.freqs_sin', 'visual.blocks.5.attn.rope.freqs_cos', 'visual.blocks.5.attn.rope.freqs_sin', 'visual.blocks.6.attn.rope.freqs_cos', 'visual.blocks.6.attn.rope.freqs_sin', 'visual.blocks.7.attn.rope.freqs_cos', 'visual.blocks.7.attn.rope.freqs_sin', 'visual.blocks.8.attn.rope.freqs_cos', 'visual.blocks.8.attn.rope.freqs_sin', 'visual.blocks.9.attn.rope.freqs_cos', 'visual.blocks.9.attn.rope.freqs_sin', 'visual.blocks.10.attn.rope.freqs_cos', 'visual.blocks.10.attn.rope.freqs_sin', 'visual.blocks.11.attn.rope.freqs_cos', 'visual.blocks.11.attn.rope.freqs_sin', 'visual.blocks.12.attn.rope.freqs_cos', 'visual.blocks.12.attn.rope.freqs_sin', 'visual.blocks.13.attn.rope.freqs_cos', 'visual.blocks.13.attn.rope.freqs_sin', 'visual.blocks.14.attn.rope.freqs_cos', 'visual.blocks.14.attn.rope.freqs_sin', 'visual.blocks.15.attn.rope.freqs_cos', 'visual.blocks.15.attn.rope.freqs_sin', 'visual.blocks.16.attn.rope.freqs_cos', 'visual.blocks.16.attn.rope.freqs_sin', 'visual.blocks.17.attn.rope.freqs_cos', 'visual.blocks.17.attn.rope.freqs_sin', 'visual.blocks.18.attn.rope.freqs_cos', 'visual.blocks.18.attn.rope.freqs_sin', 'visual.blocks.19.attn.rope.freqs_cos', 'visual.blocks.19.attn.rope.freqs_sin', 'visual.blocks.20.attn.rope.freqs_cos', 'visual.blocks.20.attn.rope.freqs_sin', 'visual.blocks.21.attn.rope.freqs_cos', 'visual.blocks.21.attn.rope.freqs_sin', 'visual.blocks.22.attn.rope.freqs_cos', 'visual.blocks.22.attn.rope.freqs_sin', 'visual.blocks.23.attn.rope.freqs_cos', 
'visual.blocks.23.attn.rope.freqs_sin']
2025-01-18T10:39:23.265864 - Loading PuLID-Flux model.
2025-01-18T10:40:19.453041 - D:\IAs\pinokio\api\comfy.git\app\env\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
2025-01-18T10:40:19.456039 - D:\IAs\pinokio\api\comfy.git\app\env\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`.
warnings.warn(msg)
2025-01-18T10:40:55.339221 - Requested to load Flux
2025-01-18T10:41:20.831897 - loaded partially 6870.410672851563 6870.314453125 0
2025-01-18T10:41:21.206359 -
0%| | 0/20 [00:00<?, ?it/s]2025-01-18T10:41:25.061227 -
0%| | 0/20 [00:03<?, ?it/s]2025-01-18T10:41:25.061227 -
2025-01-18T10:41:25.730702 - !!! Exception during processing !!! The size of tensor a (3072) must match the size of tensor b (4096) at non-singleton dimension 2
2025-01-18T10:41:26.063935 - Traceback (most recent call last):
File "D:\IAs\pinokio\api\comfy.git\app\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\IAs\pinokio\api\comfy.git\app\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\IAs\pinokio\api\comfy.git\app\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\IAs\pinokio\api\comfy.git\app\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "D:\IAs\pinokio\api\comfy.git\app\comfy_extras\nodes_custom_sampler.py", line 633, in sample
samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 985, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 111, in execute
return self.wrappers[self.idx](self, *args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_pulid_flux_ll\pulidflux.py", line 380, in pulid_outer_sample_wrappers_with_override
out = wrapper_executor(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 103, in __call__
return new_executor.execute(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 953, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 936, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 715, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "D:\IAs\pinokio\api\comfy.git\app\env\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\k_diffusion\sampling.py", line 162, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 380, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 916, in __call__
return self.predict_noise(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 919, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 360, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 196, in calc_cond_batch
return executor.execute(model, conds, x_in, timestep, model_options)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\samplers.py", line 309, in _calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\model_base.py", line 131, in apply_model
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 111, in execute
return self.wrappers[self.idx](self, *args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_pulid_flux_ll\pulidflux.py", line 404, in pulid_apply_model_wrappers
out = wrapper_executor(x, t, c_concat, c_crossattn, control, transformer_options, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 103, in __call__
return new_executor.execute(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\model_base.py", line 160, in _apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "D:\IAs\pinokio\api\comfy.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "D:\IAs\pinokio\api\comfy.git\app\comfy\ldm\flux\model.py", line 204, in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options, attn_mask=kwargs.get("attention_mask", None))
File "D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_pulid_flux_ll\PulidFluxHook.py", line 189, in pulid_forward_orig
out = blocks_replace[("double_block", i)]({"img": img,
File "D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_pulid_flux_ll\PulidFluxHook.py", line 74, in __call__
img = img + callback(temp_img,
File "D:\IAs\pinokio\api\comfy.git\app\custom_nodes\comfyui_pulid_flux_ll\PulidFluxHook.py", line 47, in pulid_patch
pulid_img = pulid_img * mask
RuntimeError: The size of tensor a (3072) must match the size of tensor b (4096) at non-singleton dimension 2
2025-01-18T10:41:26.129011 - Prompt executed in 1381.19 seconds
```
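The failure at the bottom of the traceback is a PyTorch broadcasting mismatch in `pulid_img * mask`: the trailing dimensions (3072 vs 4096) differ and neither is 1, so the elementwise multiply cannot broadcast. A minimal stdlib sketch of the broadcasting rule that rejects these shapes (the full shapes below are assumptions for illustration; only the 3072/4096 sizes come from the traceback):

```python
def broadcastable(shape_a, shape_b):
    """Elementwise ops require each trailing dimension pair to be equal,
    or for one of the pair to be 1 (the PyTorch/NumPy broadcasting rule)."""
    for a, b in zip(reversed(shape_a), reversed(shape_b)):
        if a != b and a != 1 and b != 1:
            return False
    return True

# Hypothetical shapes: the PuLID embedding carries 3072 features per
# token while the attn_mask was built with 4096 in its last dimension.
print(broadcastable((1, 512, 3072), (1, 512, 4096)))  # → False, raises in torch
print(broadcastable((1, 512, 3072), (1, 512, 1)))     # → True, would broadcast
```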
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
{"last_node_id":79,"last_link_id":153,"nodes":[{"id":16,"type":"KSamplerSelect","pos":[1079.965087890625,115.17709350585938],"size":[315,58],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"SAMPLER","type":"SAMPLER","links":[85],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"KSamplerSelect"},"widgets_values":["euler"]},{"id":63,"type":"SaveImage","pos":[1436.846435546875,677.3868408203125],"size":[415.305908203125,467.5404968261719],"flags":{},"order":29,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":130}],"outputs":[],"properties":{"Node name for S&R":"SaveImage"},"widgets_values":["ComfyUI"]},{"id":51,"type":"PulidFluxEvaClipLoader","pos":[735.524658203125,219.17886352539062],"size":[312.9228210449219,26],"flags":{"collapsed":true},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"EVA_CLIP","type":"EVA_CLIP","links":[123],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"PulidFluxEvaClipLoader"},"widgets_values":[]},{"id":47,"type":"BasicGuider","pos":[1085.741943359375,461.1787109375],"size":[241.79998779296875,46],"flags":{"collapsed":false},"order":26,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":122},{"name":"conditioning","type":"CONDITIONING","link":107}],"outputs":[{"name":"GUIDER","type":"GUIDER","links":[83],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"BasicGuider"},"widgets_values":[]},{"id":17,"type":"BasicScheduler","pos":[1079.965087890625,215.17713928222656],"size":[315,106],"flags":{"collapsed":false},"order":16,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":152,"slot_index":0}],"outputs":[{"name":"SIGMAS","type":"SIGMAS","links":[93],"slot_index":0,"shape":3}],"properties":{"Node name for 
S&R":"BasicScheduler"},"widgets_values":["simple",20,0.6]},{"id":68,"type":"ShowText|pysssss","pos":[31.18793296813965,696.1123657226562],"size":[315,76],"flags":{},"order":17,"mode":0,"inputs":[{"name":"text","type":"STRING","link":139,"widget":{"name":"text"}}],"outputs":[{"name":"STRING","type":"STRING","links":[138],"slot_index":0,"shape":6}],"properties":{"Node name for S&R":"ShowText|pysssss"},"widgets_values":[""," face<loc_378><loc_89><loc_530><loc_284>"]},{"id":6,"type":"CLIPTextEncode","pos":[36.18803787231445,822.1126098632812],"size":[314.8039855957031,57.20009231567383],"flags":{},"order":19,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":128},{"name":"text","type":"STRING","link":138,"widget":{"name":"text"}}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[41],"slot_index":0}],"title":"CLIP Text Encode (Positive Prompt)","properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":[""]},{"id":26,"type":"FluxGuidance","pos":[1079.965087890625,365.177001953125],"size":[317.4000244140625,58],"flags":{"collapsed":false},"order":21,"mode":0,"inputs":[{"name":"conditioning","type":"CONDITIONING","link":41}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[107],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"FluxGuidance"},"widgets_values":[3.5]},{"id":25,"type":"RandomNoise","pos":[-316.2863464355469,115.02350616455078],"size":[315,82],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"NOISE","type":"NOISE","links":[84],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"RandomNoise"},"widgets_values":[236871626083669,"randomize"]},{"id":49,"type":"VAEDecode","pos":[1175.19677734375,827.83154296875],"size":[140,46],"flags":{"collapsed":false},"order":28,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":87},{"name":"vae","type":"VAE","link":88}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[130,142],"slot_index":0,"shape":3}],"properties":{"Node name for 
S&R":"VAEDecode"},"widgets_values":[]},{"id":64,"type":"VAEEncode","pos":[1117.327392578125,730.445556640625],"size":[210,46],"flags":{},"order":15,"mode":0,"inputs":[{"name":"pixels","type":"IMAGE","link":133},{"name":"vae","type":"VAE","link":134}],"outputs":[{"name":"LATENT","type":"LATENT","links":[132],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAEEncode"},"widgets_values":[]},{"id":65,"type":"SetLatentNoiseMask","pos":[1126.327392578125,957.445556640625],"size":[264.5999755859375,46],"flags":{},"order":23,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":132},{"name":"mask","type":"MASK","link":147}],"outputs":[{"name":"LATENT","type":"LATENT","links":[136],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"SetLatentNoiseMask"},"widgets_values":[]},{"id":71,"type":"PreviewImage","pos":[1891,630],"size":[650.0430908203125,593.514892578125],"flags":{},"order":31,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":143}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":70,"type":"CR Simple Image Compare","pos":[1789,315],"size":[400,266],"flags":{},"order":30,"mode":0,"inputs":[{"name":"image1","type":"IMAGE","link":141,"shape":7},{"name":"image2","type":"IMAGE","link":142,"shape":7}],"outputs":[{"name":"image","type":"IMAGE","links":[143],"slot_index":0},{"name":"show_help","type":"STRING","links":null}],"properties":{"Node name for S&R":"CR Simple Image Compare"},"widgets_values":["Original","Result",100,"AlumniSansCollegiateOne-Regular.ttf",50,"normal",20]},{"id":69,"type":"DownloadAndLoadFlorence2Model","pos":[41.18807601928711,121.11165618896484],"size":[313.6364440917969,106.65802001953125],"flags":{},"order":3,"mode":0,"inputs":[{"name":"lora","type":"PEFTLORA","link":null,"shape":7}],"outputs":[{"name":"florence2_model","type":"FL2MODEL","links":[140],"slot_index":0}],"properties":{"Node name for 
S&R":"DownloadAndLoadFlorence2Model"},"widgets_values":["gokaygokay/Florence-2-Flux-Large","fp16","sdpa"]},{"id":67,"type":"Florence2Run","pos":[38.18803787231445,286.1123352050781],"size":[313.7927551269531,352],"flags":{},"order":14,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":137},{"name":"florence2_model","type":"FL2MODEL","link":140}],"outputs":[{"name":"image","type":"IMAGE","links":[148],"slot_index":0},{"name":"mask","type":"MASK","links":null},{"name":"caption","type":"STRING","links":[139]},{"name":"data","type":"JSON","links":[149],"slot_index":3}],"properties":{"Node name for S&R":"Florence2Run"},"widgets_values":["face","caption_to_phrase_grounding",true,false,1024,3,true,"",609406654141994,"randomize"]},{"id":75,"type":"Florence2toCoordinates","pos":[425.6705627441406,291.9539794921875],"size":[210,102],"flags":{},"order":18,"mode":0,"inputs":[{"name":"data","type":"JSON","link":149}],"outputs":[{"name":"center_coordinates","type":"STRING","links":[],"slot_index":0,"shape":3},{"name":"bboxes","type":"BBOX","links":[146],"slot_index":1,"shape":3}],"properties":{"Node name for S&R":"Florence2toCoordinates"},"widgets_values":["",false]},{"id":74,"type":"Sam2Segmentation","pos":[412.12835693359375,444.1212463378906],"size":[220,202],"flags":{},"order":20,"mode":0,"inputs":[{"name":"sam2_model","type":"SAM2MODEL","link":145},{"name":"image","type":"IMAGE","link":148},{"name":"bboxes","type":"BBOX","link":146,"shape":7},{"name":"coordinates_positive","type":"STRING","link":null,"widget":{"name":"coordinates_positive"}},{"name":"coordinates_positive","type":"STRING","link":null,"widget":{"name":"coordinates_negative"},"shape":7},{"name":"coordinates_negative","type":"STRING","link":null,"widget":{"name":"coordinates_negative"},"shape":7},{"name":"mask","type":"MASK","link":null,"shape":7}],"outputs":[{"name":"mask","type":"MASK","links":[144],"slot_index":0,"shape":3}],"properties":{"Node name for 
S&R":"Sam2Segmentation"},"widgets_values":[false,"","",false]},{"id":73,"type":"GrowMask","pos":[408.7563171386719,689.7699584960938],"size":[210,82],"flags":{},"order":22,"mode":0,"inputs":[{"name":"mask","type":"MASK","link":144}],"outputs":[{"name":"MASK","type":"MASK","links":[147,150,151],"slot_index":0}],"properties":{"Node name for S&R":"GrowMask"},"widgets_values":[10,true]},{"id":77,"type":"LayerMask: MaskPreview","pos":[-266.6590576171875,631.6217041015625],"size":[210,250],"flags":{},"order":24,"mode":0,"inputs":[{"name":"mask","type":"MASK","link":150}],"outputs":[],"properties":{"Node name for S&R":"LayerMask: MaskPreview"},"widgets_values":[],"color":"rgba(27, 80, 119, 0.7)"},{"id":72,"type":"Note","pos":[1784,68],"size":[423.53955078125,147.19842529296875],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[],"properties":{},"widgets_values":["Watch Flux PuLID installation Tutorial: https://youtu.be/xUduNl7-pE0\n\n\nNo Need to Apply Mask, The Florence2 will Do that Authomatically"],"color":"#432","bgcolor":"#653"},{"id":45,"type":"PulidFluxModelLoader","pos":[737.524658203125,269.1788330078125],"size":[304.9228210449219,58],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"PULIDFLUX","type":"PULIDFLUX","links":[125],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"PulidFluxModelLoader"},"widgets_values":["pulid_flux_v0.9.0.safetensors"]},{"id":54,"type":"LoadImage","pos":[1434.6485595703125,111.04784393310547],"size":[316.0858459472656,468.89483642578125],"flags":{},"order":6,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[126],"slot_index":0,"shape":3},{"name":"MASK","type":"MASK","links":null,"shape":3}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["1 
(2).png","image"]},{"id":66,"type":"LoadImage","pos":[731.8696899414062,657.385986328125],"size":[315,338],"flags":{},"order":7,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[133,137,141],"slot_index":0,"shape":3},{"name":"MASK","type":"MASK","links":[],"slot_index":1,"shape":3}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["clipspace/clipspace-mask-79599.80000001192.png [input]","image"]},{"id":10,"type":"VAELoader","pos":[-315.7478332519531,514.5606689453125],"size":[311.81634521484375,60.429901123046875],"flags":{},"order":8,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[88,134],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["ae.sft"]},{"id":76,"type":"DownloadAndLoadSAM2Model","pos":[424.50225830078125,108.2440185546875],"size":[210,130],"flags":{},"order":9,"mode":0,"inputs":[],"outputs":[{"name":"sam2_model","type":"SAM2MODEL","links":[145],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"DownloadAndLoadSAM2Model"},"widgets_values":["sam2.1_hiera_large.safetensors","single_image","cuda","fp16"]},{"id":53,"type":"PulidFluxInsightFaceLoader","pos":[734.15673828125,113.76423645019531],"size":[311.86822509765625,59.9342155456543],"flags":{},"order":10,"mode":0,"inputs":[],"outputs":[{"name":"FACEANALYSIS","type":"FACEANALYSIS","links":[124],"slot_index":0,"shape":3}],"properties":{"Node name for 
S&R":"PulidFluxInsightFaceLoader"},"widgets_values":["CUDA"]},{"id":62,"type":"ApplyPulidFlux","pos":[733.524658203125,371.1788024902344],"size":[315,206],"flags":{},"order":25,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":153},{"name":"pulid_flux","type":"PULIDFLUX","link":125},{"name":"eva_clip","type":"EVA_CLIP","link":123},{"name":"face_analysis","type":"FACEANALYSIS","link":124},{"name":"image","type":"IMAGE","link":126},{"name":"attn_mask","type":"MASK","link":151,"shape":7}],"outputs":[{"name":"MODEL","type":"MODEL","links":[122],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"ApplyPulidFlux"},"widgets_values":[1,0,1]},{"id":48,"type":"SamplerCustomAdvanced","pos":[1099.251220703125,571.6403198242188],"size":[236.8000030517578,123.9162826538086],"flags":{"collapsed":false},"order":27,"mode":0,"inputs":[{"name":"noise","type":"NOISE","link":84},{"name":"guider","type":"GUIDER","link":83},{"name":"sampler","type":"SAMPLER","link":85},{"name":"sigmas","type":"SIGMAS","link":93},{"name":"latent_image","type":"LATENT","link":136}],"outputs":[{"name":"output","type":"LATENT","links":[87],"slot_index":0,"shape":3},{"name":"denoised_output","type":"LATENT","links":null,"shape":3}],"properties":{"Node name for S&R":"SamplerCustomAdvanced"},"widgets_values":[]},{"id":41,"type":"DualCLIPLoaderGGUF","pos":[-318.7477722167969,355.56085205078125],"size":[315,106],"flags":{},"order":11,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[128],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"DualCLIPLoaderGGUF"},"widgets_values":["t5xxl_fp16.safetensors","clip_l.safetensors","flux"]},{"id":31,"type":"UnetLoaderGGUF","pos":[-780.47314453125,361.57568359375],"size":[315,58],"flags":{},"order":12,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[],"slot_index":0,"shape":3}],"properties":{"Node name for 
S&R":"UnetLoaderGGUF"},"widgets_values":["fluxRealistic_ggufFluxRealistic.gguf"]},{"id":79,"type":"UNETLoader","pos":[-632.629150390625,217.40740966796875],"size":[370,82],"flags":{},"order":13,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[152,153],"slot_index":0,"shape":3,"label":"MODEL"}],"properties":{"Node name for S&R":"UNETLoader"},"widgets_values":["flux1-dev-fp8.safetensors","default"]}],"links":[[41,6,0,26,0,"CONDITIONING"],[83,47,0,48,1,"GUIDER"],[84,25,0,48,0,"NOISE"],[85,16,0,48,2,"SAMPLER"],[87,48,0,49,0,"LATENT"],[88,10,0,49,1,"VAE"],[93,17,0,48,3,"SIGMAS"],[107,26,0,47,1,"CONDITIONING"],[122,62,0,47,0,"MODEL"],[123,51,0,62,2,"EVA_CLIP"],[124,53,0,62,3,"FACEANALYSIS"],[125,45,0,62,1,"PULIDFLUX"],[126,54,0,62,4,"IMAGE"],[128,41,0,6,0,"CLIP"],[130,49,0,63,0,"IMAGE"],[132,64,0,65,0,"LATENT"],[133,66,0,64,0,"IMAGE"],[134,10,0,64,1,"VAE"],[136,65,0,48,4,"LATENT"],[137,66,0,67,0,"IMAGE"],[138,68,0,6,1,"STRING"],[139,67,2,68,0,"STRING"],[140,69,0,67,1,"FL2MODEL"],[141,66,0,70,0,"IMAGE"],[142,49,0,70,1,"IMAGE"],[143,70,0,71,0,"IMAGE"],[144,74,0,73,0,"MASK"],[145,76,0,74,0,"SAM2MODEL"],[146,75,1,74,2,"BBOX"],[147,73,0,65,1,"MASK"],[148,67,0,74,1,"IMAGE"],[149,67,3,75,0,"JSON"],[150,73,0,77,0,"MASK"],[151,73,0,62,5,"MASK"],[152,79,0,17,0,"MODEL"],[153,79,0,62,0,"MODEL"]],"groups":[{"id":1,"title":"Face Swap Masking","bounding":[726,599,333,422],"color":"#3f789e","font_size":24,"flags":{}},{"id":2,"title":"Flux Models","bounding":[-328.156494140625,29.776546478271484,337,875.391357421875],"color":"#3f789e","font_size":24,"flags":{}},{"id":3,"title":"Prompts","bounding":[21.508338928222656,35.72065734863281,678.6956787109375,864.4009399414062],"color":"#3f789e","font_size":24,"flags":{}},{"id":4,"title":"Input 
Image","bounding":[1425,37,338,548],"color":"#3f789e","font_size":24,"flags":{}},{"id":5,"title":"PuLID","bounding":[724,41,329,548],"color":"#3f789e","font_size":24,"flags":{}},{"id":6,"title":"Ksampler","bounding":[1069,41,345,980],"color":"#3f789e","font_size":24,"flags":{}},{"id":7,"title":"Output","bounding":[1427,604,437,552],"color":"#a1309b","font_size":24,"flags":{}}],"config":{},"extra":{"ds":{"scale":0.6934334949441393,"offset":[534.6135626686157,174.8211541040852]},"node_versions":{"comfy-core":"0.3.10","comfyui_pulid_flux_ll":"1.0.4","ComfyUI_NAI-mod":"bfe88489ff250a84bc25c210d84a58135f9a8a8f","ComfyUI_Comfyroll_CustomNodes":"d78b780ae43fcf8c6b7c6505e6ffb4584281ceca","comfyui-florence2":"1.0.3","ComfyUI-segment-anything-2":"059815ecc55b17ae9b47d15ed9b39b243d73b25f","comfyui_layerstyle":"1.0.90","ComfyUI-GGUF":"5875c52f59baca3a9372d68c43a3775e21846fe0"},"ue_links":[]},"version":0.4}
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
### Logs
```powershell
```
### Other
_No response_
|
closed
|
2025-01-18T13:47:18Z
|
2025-03-01T11:40:04Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6511
|
[
"User Support",
"Custom Nodes Bug"
] |
Felix0C
| 1
|
BayesWitnesses/m2cgen
|
scikit-learn
| 127
|
add support for declarative languages
|
It seems that, especially for functional languages, it is very common to write a `Score()` function and apply it to some data.
At present, the assemblers assume that the target language is imperative.
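For illustration, a declarative target would want the whole score as a single expression rather than a sequence of statements. Here is a toy sketch (a hypothetical helper, not m2cgen's API) that emits such an expression for a linear model:

```python
def export_score_expr(coefs, intercept):
    """Render a linear model as a single pure expression (no statements),
    the shape a declarative/functional-language assembler would emit."""
    terms = [f"({c}) * x[{i}]" for i, c in enumerate(coefs)]
    return " + ".join([f"({intercept})"] + terms)

# The generated string is itself valid in expression-oriented languages;
# here we just evaluate it back in Python as a sanity check.
expr = export_score_expr([2.0, -1.0], 0.5)
score = eval(expr, {"x": [3.0, 4.0]})
```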
|
closed
|
2019-12-13T00:59:58Z
|
2020-04-24T21:48:29Z
|
https://github.com/BayesWitnesses/m2cgen/issues/127
|
[] |
StrikerRUS
| 4
|
QuivrHQ/quivr
|
api
| 3,553
|
Implement first version of retrieval/generation evaluation
|
For a list of potential datasets for retrieval see [CORE-325](https://linear.app/getquivr/issue/CORE-325/retrieval-datasets) and [Notion](https://www.notion.so/getquivr/Retrieval-generation-evaluation-189312462d4b800eb9cedff39777e80c?pvs=4)
# Evaluation steps for CI/CD
1. Select a subset (1 to 20) .Each subset contains 135 Q&A, with 5 html documents for each question, so a total of 675 documents (in html format)
2. Retrieve reference dataset --> [CORE-357](https://linear.app/getquivr/issue/CORE-357/retrieval-generation-load-reference-dataset)
3. For each row, parse, chunk and embed the documents contained in `search_results.page_result` --> [CORE-348](https://linear.app/getquivr/issue/CORE-348/retrieval-generation-eval-parse-chunk-and-embed-dataset)
4. For each row, extract the question from the `query` field and run the chosen RAG workflow to obtain an answer --> [CORE-345](https://linear.app/getquivr/issue/CORE-345/retrieval-generation-eval-run-quivr-rag-on-dataset-questions)
5. Compute evaluation metrics comparing the ground truth answers and the answers produced in 4 --> [CORE-340](https://linear.app/getquivr/issue/CORE-340/retrieval-generation-eval-metrics)
6. Push the results to exp. tracker --> [CORE-351](https://linear.app/getquivr/issue/CORE-351/retrieval-generation-eval-push-results-to-exp-tracker)
7. Define thresholds for alerting --> [CORE-350](https://linear.app/getquivr/issue/CORE-350/retrieval-generation-eval-alerting-thresholds)
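For step 5, one common generation metric is token-level F1 between the produced answer and the ground truth. A minimal sketch (the metric choice here is an assumption, not something specified in the linked tickets):

```python
def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a generated answer and the ground truth,
    a common metric for Q&A generation evaluation."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    if not pred or not ref:
        return float(pred == ref)
    # count overlapping tokens with multiplicity
    common = 0
    ref_left = list(ref)
    for tok in pred:
        if tok in ref_left:
            common += 1
            ref_left.remove(tok)
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)
```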
|
open
|
2025-01-22T11:11:58Z
|
2025-02-05T13:56:47Z
|
https://github.com/QuivrHQ/quivr/issues/3553
|
[
"enhancement"
] |
jacopo-chevallard
| 1
|
d2l-ai/d2l-en
|
pytorch
| 1,728
|
chapter 16 (recommender systems) needs translation to pytorch
|
chapter 16 (recommender systems) needs translation to pytorch
|
open
|
2021-04-19T23:59:57Z
|
2022-01-19T04:09:59Z
|
https://github.com/d2l-ai/d2l-en/issues/1728
|
[] |
murphyk
| 5
|
jina-ai/clip-as-service
|
pytorch
| 914
|
TensorRT support for openai/clip-vit-large-patch14-336
|
Is there a fundamental technical limitation preventing TensorRT support for openai/clip-vit-large-patch14-336? I just want to understand why most 768-dim embedding models are not supported, according to https://clip-as-service.jina.ai/user-guides/server/#model-support
|
open
|
2023-05-09T02:48:36Z
|
2023-05-09T03:38:49Z
|
https://github.com/jina-ai/clip-as-service/issues/914
|
[] |
junwang-wish
| 1
|
iperov/DeepFaceLab
|
machine-learning
| 545
|
Openfaceswap gui issue when training
|

It just gets stuck on this part and then nothing happens, no matter how long I leave it. If anyone has any ideas, it would be a great help. Thanks.
|
closed
|
2020-01-04T20:08:36Z
|
2020-03-28T05:42:17Z
|
https://github.com/iperov/DeepFaceLab/issues/545
|
[] |
badat1223
| 4
|
mwaskom/seaborn
|
data-science
| 3,462
|
seaborn/_oldcore.py generating Futurewarning for deprecated pandas `is_categorical_dtype` and `use_inf_as_na` usage
|
This is more informational than a true "issue" as everything appears to be working.
I installed Ultralytics 8.0.170 in a new `venv` and at the start of training, received numerous `FutureWarnings` when `trainer` started plotting labels. Screenshot shows sample of output (continues for several more lines).

## The features being warned about are:
- `pd.api.types.is_categorical_dtype`
- `pd.option_context('mode.use_inf_as_na')`
both appear to be from the `seaborn` library.
Originally referenced in https://github.com/ultralytics/ultralytics/issues/4729
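Until the libraries are updated, one workaround is to filter these warnings at the call site. A self-contained sketch (the `call_plotting_code` function below is a stand-in for the seaborn call that emits the warning):

```python
import warnings

def call_plotting_code():
    # stand-in for the seaborn internals that call the deprecated pandas APIs
    warnings.warn("is_categorical_dtype is deprecated", FutureWarning)
    return "plotted"

# suppress only FutureWarning, and only inside this scope
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", FutureWarning)
    result = call_plotting_code()
```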
|
closed
|
2023-09-05T10:03:41Z
|
2023-10-01T11:54:54Z
|
https://github.com/mwaskom/seaborn/issues/3462
|
[] |
glenn-jocher
| 4
|
ipython/ipython
|
data-science
| 13,959
|
%debug magic doesn't capture the locals
|
To reproduce:
1 - create a file `test.ipy`:
```python
def fun():
res = 1
%debug print(res)
fun()
```
2 - run with `ipython test.ipy`
3 - After pressing continue on the pdb promt, I get:
`NameError: name 'res' is not defined`
This is in contrast with the following code, which works:
```python
def fun():
res = 1
%timeit print(res)
fun()
```
The difference seems to be that the `timeit` magic is decorated with `@needs_local_scope`
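The effect of `@needs_local_scope` can be illustrated without IPython: a magic that receives the caller's locals can resolve names like `res`, while one that doesn't cannot (the helper names here are hypothetical):

```python
def run_with_scope(expr, local_ns=None):
    """Evaluate an expression the way a @needs_local_scope magic can:
    the caller's locals are passed in explicitly."""
    return eval(expr, {}, local_ns or {})

def fun():
    res = 1
    # passing locals() mirrors what IPython does for needs_local_scope magics
    return run_with_scope("res + 1", locals())
```

Without the `local_ns` argument, `run_with_scope("res + 1")` raises `NameError`, which matches the behavior reported for `%debug`.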
|
closed
|
2023-03-04T06:17:35Z
|
2023-03-13T10:35:54Z
|
https://github.com/ipython/ipython/issues/13959
|
[] |
impact27
| 0
|
jofpin/trape
|
flask
| 255
|
Ngrok tunnels never seem to load.
|
The startup and execution of trape seem to be fine (apart from a warning about being a development server, could possibly be the problem, I don't know), but whenever I try to use my public ngrok link (or my local IP address for that matter), the page seems to just infinitely load until I close the tunnel, which then returns an Error 502 Bad Gateway.
What's going on?
|
open
|
2020-08-14T11:22:58Z
|
2020-08-14T11:22:58Z
|
https://github.com/jofpin/trape/issues/255
|
[] |
jethr0-1
| 0
|
zappa/Zappa
|
django
| 935
|
[Migrated] Zappa init error: Access is denied (windows)
|
Originally from: https://github.com/Miserlou/Zappa/issues/2203 by [merlinnaidoo](https://github.com/merlinnaidoo)
(venv) C:\Users\zappa>zappa init
Access is denied.
(cannot run this to run to get json)
(venv) C:\Users\zappa>zappa
Access is denied.
python -m zappa
C:\Users\merlinn\PycharmProjects\lambdazapptest\venv\Scripts\python.exe: No module named zappa.__main__; 'zappa' is a package and cannot be directly executed
(the above is understood but thought i should include to help if anything)
python version 3.8. windows 10 with full rights on project folder.
|
closed
|
2021-02-20T13:24:44Z
|
2022-07-16T04:47:22Z
|
https://github.com/zappa/Zappa/issues/935
|
[] |
jneves
| 1
|
jrieke/traingenerator
|
streamlit
| 7
|
New template: Image classification with Keras/Tensorflow
|
Opening up this issue to coordinate work on a new keras/tensorflow template – @jaymody and @EteimZ have shown interest in this. Guys, please feel free to share here if/how you want to work on this and coordinate :)
The template could probably take a lot of the code from the existing [pytorch template](https://github.com/jrieke/traingenerator/tree/main/templates/Image%20classification_PyTorch) and [sklearn template](https://github.com/jrieke/traingenerator/tree/main/templates/Image%20classification_scikit-learn) for image classification and just train tensorflow models. I guess keras will be the easiest (and most useful) option.
|
open
|
2021-01-03T01:55:31Z
|
2021-01-03T04:49:56Z
|
https://github.com/jrieke/traingenerator/issues/7
|
[
"new template"
] |
jrieke
| 1
|
omnilib/aiomultiprocess
|
asyncio
| 90
|
how multi processes can be used for each task?
|
### Description
I tested the following two methods. method_1 shows that multiprocessing is used for task dispatching, while in method_2 all tasks run in a single process. Can anyone show me a way to involve multiple processes in method_2? Thanks in advance.
### Details
```
async def get(url):
    async with request("GET", url) as response:
        result = await response.text("utf-8")
        logger.info(len(result))
        return result

async def method_1():
    urls = ["https://jreese.sh", "https://noswap.com", "https://omnilib.dev", "https://jreese.sh", "https://noswap.com", "https://omnilib.dev", "https://jreese.sh", "https://noswap.com", "https://omnilib.dev", "https://jreese.sh", "https://noswap.com", "https://omnilib.dev"]
    async with amp.Pool() as pool:
        async for result in pool.map(get, urls):
            logger.info(len(result))

async def method_2():
    pool_tasks = []
    calls_list = ["https://jreese.sh", "https://noswap.com", "https://omnilib.dev", "https://jreese.sh", "https://noswap.com", "https://omnilib.dev", "https://jreese.sh", "https://noswap.com", "https://omnilib.dev", "https://jreese.sh", "https://noswap.com", "https://omnilib.dev"]
    async with amp.Pool() as pool:
        for call in calls_list:
            pool_tasks.append(pool.apply(get, args=[call]))
        [await _ for _ in tqdm(asyncio.as_completed(pool_tasks), total=len(pool_tasks), ncols=90, desc="total", position=0, leave=True)]
```
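One standard-library way to check whether tasks are actually spread across processes (independent of aiomultiprocess, just a diagnostic sketch) is to have each task report the PID of its worker:

```python
import os
from multiprocessing import Pool

def which_pid(_):
    # each worker reports the process it runs in
    return os.getpid()

def worker_pids(n_tasks=16, n_procs=4):
    """Run n_tasks trivial jobs on a pool and return the set of worker PIDs."""
    with Pool(n_procs) as pool:
        return set(pool.map(which_pid, range(n_tasks)))
```

If the returned set has more than one PID, work really was distributed across processes.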
* OS: Mac
* Python version: 3.8.2
* aiomultiprocess version: 0.9.0
* Can you repro on master?
* Can you repro in a clean virtualenv?
|
open
|
2021-03-22T00:48:17Z
|
2021-03-22T03:22:39Z
|
https://github.com/omnilib/aiomultiprocess/issues/90
|
[] |
doncat99
| 1
|
sanic-org/sanic
|
asyncio
| 2,443
|
app.add_signal() AttributeError: 'method' object has no attribute '__requirements__'
|
**Describe the bug**
```
AttributeError: 'method' object has no attribute '__requirements__'
```
```
File "server.py", line 56, in start
self.app.add_signal( self.stop , "server.shutdown.after" )
File "/venv/lib/python3.9/site-packages/sanic/mixins/signals.py", line 76, in add_signal
self.signal(event=event, condition=condition, exclusive=exclusive)(
File "/venv/lib/python3.9/site-packages/sanic/mixins/signals.py", line 57, in decorator
self._apply_signal(future_signal)
File "/venv/lib/python3.9/site-packages/sanic/app.py", line 422, in _apply_signal
return self.signal_router.add(*signal)
File "/venv/lib/python3.9/site-packages/sanic/signals.py", line 218, in add
handler.__requirements__ = condition # type: ignore
```
**Code snippet**
```
def stop( *args , **kwargs ):
    print( "stop" )

app.add_signal( stop , "server.shutdown.after" )
```
**Expected behavior**
https://github.com/sanic-org/sanic/blob/main/tests/test_signals.py#L14=
|
closed
|
2022-04-30T21:47:14Z
|
2022-05-17T14:55:31Z
|
https://github.com/sanic-org/sanic/issues/2443
|
[
"information required"
] |
48723247842
| 3
|
flavors/django-graphql-jwt
|
graphql
| 327
|
Case sensitive Username
|
We are using a Postgres backend, which is case-sensitive, unlike MSSQL. That means the middleware fails to fetch a user if the username is not passed exactly as stored in the database.
Is there a way or setting to configure to disable case sensitivity?
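One common Django approach is a custom authentication backend that looks the user up with a `username__iexact` query instead of an exact match. Here is a framework-free sketch of that lookup behavior (the `find_user` helper is hypothetical, not part of django-graphql-jwt):

```python
def find_user(users_by_name, username):
    """Case-insensitive username lookup, the behavior a custom Django
    backend achieves with a `username__iexact` query."""
    wanted = username.casefold()
    matches = [u for name, u in users_by_name.items() if name.casefold() == wanted]
    # Postgres can store "Bob" and "bob" as distinct rows, so the lookup
    # must decide what to do when case-insensitive matching is ambiguous
    if len(matches) > 1:
        raise ValueError("ambiguous username under case-insensitive matching")
    return matches[0] if matches else None
```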
|
open
|
2024-01-12T13:58:36Z
|
2024-01-12T13:58:36Z
|
https://github.com/flavors/django-graphql-jwt/issues/327
|
[] |
hirensoni913
| 0
|
JaidedAI/EasyOCR
|
deep-learning
| 502
|
Why don't you support refine model in text detection like CRAFT?
|
As far as I know, CRAFT supports a refiner model to merge boxes, which improves accuracy. Why doesn't EasyOCR support it?
Thank you so much
|
closed
|
2021-07-30T15:42:47Z
|
2021-10-06T09:04:24Z
|
https://github.com/JaidedAI/EasyOCR/issues/502
|
[] |
vneseresearcher
| 1
|
mitmproxy/mitmproxy
|
python
| 7,279
|
How can I change mitmweb style?
|
1. How can I change the "comment" style in mitmweb, e.g. center the value or color it red?

2. Can I add a new column in mitmweb, so I can easily notice error responses?
|
closed
|
2024-10-27T10:27:08Z
|
2024-10-27T12:21:18Z
|
https://github.com/mitmproxy/mitmproxy/issues/7279
|
[
"kind/triage"
] |
Stephen0910
| 1
|
pytest-dev/pytest-html
|
pytest
| 193
|
[Feature Request] Need report object in pytest_html_results_summary hook to extend the summary section
|
It would be very useful to add a "report" object to the pytest_html_results_summary hook, so that at runtime we can use report attributes such as outcome, location, and sections for customization. With that in place, the summary section could include a test-summary table consisting of test_group_name, the total tests per folder categorized by requirement, and passed/failed/skipped counts, which would provide an overall coverage view.
Alternatively, hooks like the html_table, row, and column hooks that are already available could be provided for the summary section.
|
open
|
2019-01-12T16:17:33Z
|
2020-11-22T14:56:58Z
|
https://github.com/pytest-dev/pytest-html/issues/193
|
[
"enhancement",
"feature"
] |
kvenkat88
| 9
|
pydantic/logfire
|
fastapi
| 636
|
Logfire does not have exception info
|
### Description
If I configure my StructLog processors like this
```
structlog.configure(
processors=[
structlog.contextvars.merge_contextvars,
structlog.processors.add_log_level,
structlog.processors.TimeStamper(fmt="iso"),
logfire.StructlogProcessor(),
structlog.dev.ConsoleRenderer(
colors=True, exception_formatter=RichTracebackFormatter(max_frames=1, show_locals=False, width=80)
),
],
)
```
I don't see the exception info in LogFire:
I just see this
```
{
"connection_type": "http",
"logfire.disable_console_log": true,
"logfire.msg_template": "Uncaught exception",
"path": "/register/github"
}
```
But if I add the `structlog.processors.format_exc_info,`
```
structlog.configure(
processors=[
structlog.contextvars.merge_contextvars,
structlog.processors.add_log_level,
structlog.processors.TimeStamper(fmt="iso"),
structlog.processors.format_exc_info,
logfire.StructlogProcessor(),
structlog.dev.ConsoleRenderer(
colors=True, exception_formatter=RichTracebackFormatter(max_frames=1, show_locals=False, width=80)
),
],
)
```
I am able to get the exception
```
{
"connection_type": "http",
"exception": "...",
"logfire.disable_console_log": true,
"logfire.msg_template": "Uncaught exception",
"path": "/register/github"
}
```
But this causes the RichTracebackFormatter to not work and I don't have pretty exceptions in the terminal.
### Python, Logfire & OS Versions, related packages (not required)
```TOML
logfire="2.4.1"
platform="Linux-6.6.63_1-x86_64-with-glibc2.39"
python="3.12.7 (main, Oct 16 2024, 04:37:19) [Clang 18.1.8 ]"
[related_packages]
requests="2.32.3"
pydantic="2.9.2"
protobuf="5.28.3"
rich="13.9.4"
executing="2.1.0"
opentelemetry-api="1.28.2"
opentelemetry-distro="0.49b2"
opentelemetry-exporter-otlp-proto-common="1.28.1"
opentelemetry-exporter-otlp-proto-http="1.28.1"
opentelemetry-instrumentation="0.49b2"
opentelemetry-instrumentation-asgi="0.49b2"
opentelemetry-instrumentation-sqlalchemy="0.49b2"
opentelemetry-proto="1.28.1"
opentelemetry-sdk="1.28.2"
opentelemetry-semantic-conventions="0.49b2"
opentelemetry-util-http="0.49b2"
```
|
closed
|
2024-11-30T23:01:02Z
|
2024-12-04T12:30:40Z
|
https://github.com/pydantic/logfire/issues/636
|
[] |
vikigenius
| 1
|
QingdaoU/OnlineJudge
|
django
| 262
|
Some suggestions for future development
|
I really like this judging system.
I have the following suggestions:
1. One-click export and import of all data.
2. When a problem uses OI-style scoring, highlight the score breakdown prominently, like Luogu does.
3. Add a "reference solution" button to problems, so the problem setter can attach a model solution for students who cannot get AC to consult.
4. Add a per-problem discussion area and personal blogs for contestants.
5. Let contestants opt in to a code-sharing program, so others can learn from accepted code.
6. Support hacking in contests.
7. After importing problems, support batch tagging and batch enabling of visibility.
8. Some problems raise errors on import; please fix the compatibility issues.
9. Let admins configure whether anonymous visitors see the Chinese or English interface.
10. Support switching themes from the admin panel; advanced users could write themes and upload them to the community.
11. Support daily check-in, countdown displays for NOIP/NOI/ACM events, etc.
12. Put a donation QR code in a prominent place on GitHub so people can support this open-source project (if it is already there, I may have missed it).
13. Add an online IDE, or a debug mode in the submission code box, so code can be debugged online in environments where installing a compiler is inconvenient.
14. Detect or limit abuse such as using file I/O to write large amounts of junk data, or submitting malicious code to stall the system.
15. Add user IP banning.
16. Let users write a detailed self-introduction for display.
17. Support hot-updating the OJ to the latest version from the admin panel.
|
open
|
2019-09-09T16:36:41Z
|
2019-09-22T04:57:26Z
|
https://github.com/QingdaoU/OnlineJudge/issues/262
|
[] |
CrazyBoyM
| 19
|
jumpserver/jumpserver
|
django
| 15,098
|
[Question] Custom applet
|
### Product Version
4
### Product Edition
- [x] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online Installation (One-click command installation)
- [x] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
create a custom build applet
### 🤔 Question Description
Hi everyone, I was exploring JumpServer features and wanted to test the custom-built applet, but I got this error. I checked the docs, but there was not enough information about it. As an example, I found the code for Winbox in a previous issue (https://github.com/jumpserver/jumpserver/issues/13848), but the result didn't change. Can you help me with this? I will upload both my .zip file and a screenshot of the tinker agent running on my Windows server. Thank you.
[winbox.zip](https://github.com/user-attachments/files/19402004/winbox.zip)
### Expected Behavior
_No response_
### Additional Information
_No response_
|
open
|
2025-03-22T08:39:37Z
|
2025-03-24T03:29:54Z
|
https://github.com/jumpserver/jumpserver/issues/15098
|
[
"⏳ Pending feedback",
"🤔 Question"
] |
hellsangell
| 1
|
plotly/dash
|
data-visualization
| 2,606
|
[BUG] Duplicate callback outputs error when background callbacks or long_callbacks share a cancel input
|
**Describe your context**
Currently I am trying to migrate a Dash application from using versions 2.5.1 to a newer one, and as suggested for Dash >2.5.1 I am moving from using ```long_callbacks``` to using the ```dash.callback``` with ```background=True```, along with moving to using ```background_callback_manager```.
Environment
```
dash >2.5.1
```
**Describe the bug**
Updating to Dash 2.6.0+ causes previously working ```long_callbacks``` to trigger a duplicate callback output error, even though there are no duplicate outputs. The error is still present after switching to ```dash.callback``` with ```background=True``` and moving to ```background_callback_manager```. It only appears when multiple background callbacks (or long_callbacks) share a cancel input.
Here is a [link](https://github.com/C-C-Shen/dash_background_callback_test) to a test repo that shows this, with a Dash 2.5.1 version that works and a Dash 2.6.1 version that does not.
Here is a code snippet from the example repo:
```
@callback(
    output=Output("paragraph_id_1", "children"),
    inputs=Input("button_id_1", "n_clicks"),
    prevent_initial_call=True,
    background=True,
    running=[
        (Output("button_id_1", "disabled"), True, False),
    ],
    cancel=[Input("cancel_button_id", "n_clicks")],
)
def update_clicks(n_clicks):
    time.sleep(2.0)
    return [f"Clicked Button 1: {n_clicks} times"]

@callback(
    output=Output("paragraph_id_2", "children"),
    inputs=Input("button_id_2", "n_clicks"),
    prevent_initial_call=True,
    background=True,
    running=[
        (Output("button_id_2", "disabled"), True, False),
    ],
    cancel=[Input("cancel_button_id", "n_clicks")],
)
def update_clicks(n_clicks):
    time.sleep(2.0)
    return [f"Clicked Button 2: {n_clicks} times"]
```
**Expected behavior**
Sharing the same cancel parameter between multiple callbacks should work like it does in Dash 2.5.1. Moving to Dash 2.6.0+ should probably not be causing a duplicate callback output error when no outputs are duplicated.
- if this is expected then it should be mentioned in the changelog
**Screenshots**
This an example error that is associated with the ```new.py``` in the example repo linked above

|
closed
|
2023-07-31T19:02:22Z
|
2023-08-01T11:57:05Z
|
https://github.com/plotly/dash/issues/2606
|
[] |
C-C-Shen
| 1
|
zappa/Zappa
|
flask
| 729
|
[Migrated] Timeout without a "Zappa event"
|
Originally from: https://github.com/Miserlou/Zappa/issues/1843 by [nihaals](https://github.com/nihaals)
## Context
When I send a request to my Zappa page I get an `Internal server error`, but my logs simply say:
```
[1553875874136] Instancing..
[1553875879150] 2019-03-29T16:11:19.138Z b3b3a6b7-6ee5-4ea6-a5f3-f811264d1ddd Task timed out after 5.00 seconds
```
## Your Environment
* Zappa version used: 0.48.2
* Operating System and Python version: Docker image `python:3.6` (Python 3.6, I believe stretch)
* The output of `pip freeze`: My requirements file:
```
django==2.1.7
django-crispy-forms==1.7.2
django-recaptcha==2.0.2
Pillow==5.4.1
django-cleanup==3.1.0
dj-database-url==0.5.0
psycopg2-binary==2.7.7
sentry-sdk==0.7.4
libsass==0.17.0
django-compressor==2.2
django-sass-processor==0.7.2
requests==2.21.0
django-ipware==2.1.0
django-maintenance-mode==0.13.0
django-constance[database]==2.3.1
django-grappelli==2.12.2
django-storages==1.7.1
collectfast==0.6.2
```
* Your `zappa_settings.json`:
```json
{
"staging": {
"django_settings": "orangetools.settings",
"profile_name": null,
"project_name": "orange-tools",
"runtime": "python3.6",
"s3_bucket": "zappa-pe5li99w9",
"aws_region": "eu-west-2",
"exclude": [
],
"memory_size": 256,
"timeout_seconds": 5,
"domain": "staging.orangetools.xyz",
"keep_warm": false
}
}
```
This used to work fine but for the last day, I have been having this issue and have tried to redeploy it but still having issues.
|
closed
|
2021-02-20T12:41:20Z
|
2024-04-13T18:14:44Z
|
https://github.com/zappa/Zappa/issues/729
|
[
"no-activity",
"auto-closed"
] |
jneves
| 2
|
gradio-app/gradio
|
data-science
| 10,863
|
Support for a favicon
|
- [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
not a problem, just an improvement.
**Describe the solution you'd like**
e.g. ```demo.launch(favicon="/path/to/favicon.ico")```
**Additional context**
Add any other context or screenshots about the feature request here.
|
closed
|
2025-03-22T16:42:06Z
|
2025-03-23T21:31:12Z
|
https://github.com/gradio-app/gradio/issues/10863
|
[] |
rbpasker
| 1
|
hankcs/HanLP
|
nlp
| 833
|
The word2vec training tool's parameter documentation is inaccurate
|
<!--
Note: the checklist and version number are required; issues without them will not be answered. If you want a quick reply, please fill in the template carefully. Thank you for your cooperation.
-->
## Checklist
Please confirm the following:
* I have carefully read the following documents and found no answer:
 - [README](https://github.com/hankcs/HanLP)
 - [wiki](https://github.com/hankcs/HanLP/wiki)
 - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer.
* I understand that the open-source community is a voluntary community of enthusiasts and assumes no responsibilities or obligations. I will be polite and thank everyone who helps me.
* [x] I put an x in the brackets to confirm all of the above.
## Version
<!-- For release versions, give the jar filename without the extension; for the GitHub repo, specify the master or portable branch -->
The current latest version is: 1.6.3
The version I am using is: 1.6.3
<!-- The items above are required; the rest is free-form -->
## My question
In the word2vec training tool shipped with this project, the `-hs` and `-cbow` options described in the documentation actually require a numeric argument (1 or another value); otherwise, running the tool throws an exception:
Exception in thread "main" java.lang.IllegalArgumentException: Argument missing for -cbow
The training-tool Examples in the wiki have the same problem.
Looking at the code, the setConfig function in AbstractTrainer.java parses the arguments; `-cbow` must be followed by 1 for the CBOW model to be used.
if ((i = argPos("-cbow", args)) >= 0) config.setUseContinuousBagOfWords(Integer.parseInt(args[i + 1]) == 1);
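The check in `setConfig`/`argPos` can be sketched in Python (an illustrative translation of the Java behavior, not HanLP code):

```python
def flag_value(args, flag):
    """Mimics AbstractTrainer.argPos: a flag must be followed by a value."""
    if flag not in args:
        return None
    i = args.index(flag)
    if i + 1 >= len(args):
        # this is the case that triggers "Argument missing for -cbow"
        raise ValueError(f"Argument missing for {flag}")
    return args[i + 1]
```

A bare trailing `-cbow` with no value therefore raises, which matches the reported exception.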
## Reproduction
java -cp hanlp-1.6.3.jar com.hankcs.hanlp.mining.word2vec.Train -input input.txt -output output.txt -cbow
### Expected output
Training starts
### Actual output
<!-- What did HanLP actually output? What was the effect? Where did it go wrong? -->
```
Exception in thread "main" java.lang.IllegalArgumentException: Argument missing for -cbow
at com.hankcs.hanlp.mining.word2vec.AbstractTrainer.argPos(AbstractTrainer.java:50)
at com.hankcs.hanlp.mining.word2vec.AbstractTrainer.argPos(AbstractTrainer.java:40)
at com.hankcs.hanlp.mining.word2vec.AbstractTrainer.setConfig(AbstractTrainer.java:62)
at com.hankcs.hanlp.mining.word2vec.Train.execute(Train.java:24)
at com.hankcs.hanlp.mining.word2vec.Train.main(Train.java:38)
```
## Other information
<!-- Anything that might help: screenshots, logs, config files, related issues, etc. -->
|
closed
|
2018-05-18T05:59:11Z
|
2020-01-01T10:50:15Z
|
https://github.com/hankcs/HanLP/issues/833
|
[
"ignored"
] |
linuxsong
| 2
|
autogluon/autogluon
|
computer-vision
| 4,595
|
Bump transformers and accelerate versions
|
## Description
- The incoming Chronos version depends on `accelerate` version 0.32 and runs stable with version 0.34, which is the last version of `accelerate` before its v1.0 release. In AG, accelerate is capped at v0.22.
- Transformers dependency is also now 5 minor versions behind. Chronos runs stable with v4.39. The latest version is v4.46. AG cap is at 4.41.
This is a tracking issue to bump these dependencies to
```
"transformers[sentencepiece]": ">=4.39.0,<4.47",
"accelerate": ">=0.32,<1",
```
## References
|
closed
|
2024-10-29T16:50:20Z
|
2024-11-14T19:03:17Z
|
https://github.com/autogluon/autogluon/issues/4595
|
[
"enhancement",
"module: timeseries",
"module: multimodal",
"dependency",
"priority: 0"
] |
canerturkmen
| 1
|
python-arq/arq
|
asyncio
| 29
|
add trove classifier
|
see https://github.com/aio-libs/aiohttp-devtools/pull/68
|
closed
|
2017-04-22T11:10:53Z
|
2017-05-06T10:35:31Z
|
https://github.com/python-arq/arq/issues/29
|
[] |
samuelcolvin
| 0
|
xonsh/xonsh
|
data-science
| 5,312
|
semicolon + newline + space in subprocess mode results in SyntaxError
|
## xonfig
<details>
```
+------------------+-----------------+
| xonsh | 0.14.0 |
| Python | 3.10.12 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.38 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.15.1 |
| on posix | True |
| on linux | True |
| distro | ubuntu |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file | [] |
+------------------+-----------------+
```
</details>
## Expected Behavior
Running from bash:
```
$ xonsh -c $'echo foo;\n echo bar'
```
Should result in:
```
foo
bar
```
i.e. whitespace like that should be ignored around a semicolon
## Current Behavior
Result is a `SyntaxError`:
```xsh
XONSH_DEBUG=1 xonsh -c $'echo foo;\n echo bar'
<string>:1:5:9 - echo foo;
<string>:1:5:9 + ![echo foo];
<string>:2:0 - echo bar
<string>:2:0 + ![echo bar]
<string>:1:5 - echo foo;
<string>:1:5 + ![echo foo;]
<string>:2:0 - echo bar
<string>:2:0 + ![echo bar]
xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
File "<string>", line 2
echo bar
SyntaxError: ('code: ',)
```
I would guess this is because what ends up getting executed is:
```xsh
![echo foo]
![echo bar]
```
And the indent on the second line is invalid python syntax?
I would say this is probably a pretty uncommon edge case, but I encountered it while trying to install the neovim plugin [neovim-treesitter](https://github.com/nvim-treesitter/nvim-treesitter) which runs shell commands from lua, and [one of them follows the above pattern](https://github.com/nvim-treesitter/nvim-treesitter/blob/5e4f959d5979730ddb2ee9ae60f5133081502b23/lua/nvim-treesitter/shell_command_selectors.lua#L332) with the newline and space. (I might open a PR on that repo to remove the space, but this still seems like a potential bug with xonsh anyway.) FWIW I already have a workaround for that, I can force vim to use bash as its shell for command execution.
### Traceback (if applicable)
<details>
```xsh
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/xonsh/execer.py", line 343, in _parse_ctx_free
return _try_parse(input, greedy=False)
File "/usr/local/lib/python3.10/dist-packages/xonsh/execer.py", line 253, in _try_parse
raise original_error from None
File "/usr/local/lib/python3.10/dist-packages/xonsh/execer.py", line 234, in _try_parse
tree = self.parser.parse(
File "/usr/local/lib/python3.10/dist-packages/xonsh/parsers/base.py", line 507, in parse
tree = self.parser.parse(input=s, lexer=self.lexer, debug=debug_level)
File "/usr/local/lib/python3.10/dist-packages/xonsh/ply/ply/yacc.py", line 335, in parse
return self.parseopt_notrack(input, lexer, debug, tracking, tokenfunc)
File "/usr/local/lib/python3.10/dist-packages/xonsh/ply/ply/yacc.py", line 1203, in parseopt_notrack
tok = call_errorfunc(self.errorfunc, errtoken, self)
File "/usr/local/lib/python3.10/dist-packages/xonsh/ply/ply/yacc.py", line 194, in call_errorfunc
r = errorfunc(token)
File "/usr/local/lib/python3.10/dist-packages/xonsh/parsers/base.py", line 3600, in p_error
self._parse_error(msg, self.currloc(lineno=p.lineno, column=p.lexpos))
File "/usr/local/lib/python3.10/dist-packages/xonsh/parsers/base.py", line 634, in _parse_error
raise_parse_error(msg, loc, self._source, self.lines)
File "/usr/local/lib/python3.10/dist-packages/xonsh/parsers/base.py", line 220, in raise_parse_error
raise err
File "<string>", line 2
echo bar
SyntaxError: ('code: ',)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/xonsh/execer.py", line 234, in _try_parse
tree = self.parser.parse(
File "/usr/local/lib/python3.10/dist-packages/xonsh/parsers/base.py", line 507, in parse
tree = self.parser.parse(input=s, lexer=self.lexer, debug=debug_level)
File "/usr/local/lib/python3.10/dist-packages/xonsh/ply/ply/yacc.py", line 335, in parse
return self.parseopt_notrack(input, lexer, debug, tracking, tokenfunc)
File "/usr/local/lib/python3.10/dist-packages/xonsh/ply/ply/yacc.py", line 1203, in parseopt_notrack
tok = call_errorfunc(self.errorfunc, errtoken, self)
File "/usr/local/lib/python3.10/dist-packages/xonsh/ply/ply/yacc.py", line 194, in call_errorfunc
r = errorfunc(token)
File "/usr/local/lib/python3.10/dist-packages/xonsh/parsers/base.py", line 3600, in p_error
self._parse_error(msg, self.currloc(lineno=p.lineno, column=p.lexpos))
File "/usr/local/lib/python3.10/dist-packages/xonsh/parsers/base.py", line 634, in _parse_error
raise_parse_error(msg, loc, self._source, self.lines)
File "/usr/local/lib/python3.10/dist-packages/xonsh/parsers/base.py", line 220, in raise_parse_error
raise err
File "<string>", line 2
![echo bar]
SyntaxError: ('code: ',)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/xonsh/main.py", line 522, in main_xonsh
exc_info = run_code_with_cache(
File "/usr/local/lib/python3.10/dist-packages/xonsh/codecache.py", line 215, in run_code_with_cache
ccode = compile_code(display_filename, code, execer, glb, loc, mode)
File "/usr/local/lib/python3.10/dist-packages/xonsh/codecache.py", line 125, in compile_code
ccode = execer.compile(code, glbs=glb, locs=loc, mode=mode, filename=filename)
File "/usr/local/lib/python3.10/dist-packages/xonsh/execer.py", line 130, in compile
tree = self.parse(input, ctx, mode=mode, filename=filename, transform=transform)
File "/usr/local/lib/python3.10/dist-packages/xonsh/execer.py", line 86, in parse
tree, input = self._parse_ctx_free(input, mode=mode, filename=filename)
File "/usr/local/lib/python3.10/dist-packages/xonsh/execer.py", line 345, in _parse_ctx_free
return _try_parse(input, greedy=True)
File "/usr/local/lib/python3.10/dist-packages/xonsh/execer.py", line 330, in _try_parse
raise original_error
File "/usr/local/lib/python3.10/dist-packages/xonsh/execer.py", line 234, in _try_parse
tree = self.parser.parse(
File "/usr/local/lib/python3.10/dist-packages/xonsh/parsers/base.py", line 507, in parse
tree = self.parser.parse(input=s, lexer=self.lexer, debug=debug_level)
File "/usr/local/lib/python3.10/dist-packages/xonsh/ply/ply/yacc.py", line 335, in parse
return self.parseopt_notrack(input, lexer, debug, tracking, tokenfunc)
File "/usr/local/lib/python3.10/dist-packages/xonsh/ply/ply/yacc.py", line 1203, in parseopt_notrack
tok = call_errorfunc(self.errorfunc, errtoken, self)
File "/usr/local/lib/python3.10/dist-packages/xonsh/ply/ply/yacc.py", line 194, in call_errorfunc
r = errorfunc(token)
File "/usr/local/lib/python3.10/dist-packages/xonsh/parsers/base.py", line 3600, in p_error
self._parse_error(msg, self.currloc(lineno=p.lineno, column=p.lexpos))
File "/usr/local/lib/python3.10/dist-packages/xonsh/parsers/base.py", line 634, in _parse_error
raise_parse_error(msg, loc, self._source, self.lines)
File "/usr/local/lib/python3.10/dist-packages/xonsh/parsers/base.py", line 220, in raise_parse_error
raise err
File "<string>", line 2
echo bar
SyntaxError: ('code: ',)
```
</details>
## Steps to Reproduce
From bash:
```xsh
xonsh -c $'echo foo;\n echo bar'
```
From xonsh:
```xsh
xonsh -c 'echo foo;\n echo bar'
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
|
open
|
2024-03-24T17:11:27Z
|
2024-03-25T10:58:28Z
|
https://github.com/xonsh/xonsh/issues/5312
|
[
"parser",
"integration-with-other-tools"
] |
jahschwa
| 1
|
frol/flask-restplus-server-example
|
rest-api
| 158
|
Unable to build docker image from Dockerfile
|
It seems many APIs and packages are out of date, so I cannot build an image by myself from the Dockerfile. Can you help update it, or show the correct version of each dependency?
|
closed
|
2024-04-04T15:58:09Z
|
2024-04-06T02:07:17Z
|
https://github.com/frol/flask-restplus-server-example/issues/158
|
[] |
iridium-soda
| 2
|
miguelgrinberg/Flask-Migrate
|
flask
| 191
|
hello,python db_create.py db upgrade cmd error
|
sqlalchemy.exc.InternalError: (pymysql.err.InternalError) (1050, "Table 'userinfo' already exists") [SQL: '\nCREATE TABLE `UserInfo` (\n\tid INTEGER NOT NULL AUTO_INCREMENT, \n\t`userName` VARCHAR(80), \n\t`userPassword_hash` VARCHAR(120) NOT NULL, \n\t`userEmail` VARCHAR(120), \n\t`userNickName` VARCHAR(40) NOT NULL, \n\t`userIntroduction` VARCHAR(200), \n\t`userPhoneNum` INTEGER NOT NULL, \n\tPRIMARY KEY (id), \n\tUNIQUE (`userNickName`), \n\tUNIQUE (`userPhoneNum`)\n)\n\n'] (Background on this error at: http://sqlalche.me/e/2j85)
versions dir:
```python
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('UserInfo',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('userName', sa.String(length=80), nullable=True),
        sa.Column('userPassword_hash', sa.String(length=120), nullable=False),
        sa.Column('userEmail', sa.String(length=120), nullable=True),
        sa.Column('userNickName', sa.String(length=40), nullable=False),
        sa.Column('userIntroduction', sa.String(length=200), nullable=True),
        sa.Column('userPhoneNum', sa.Integer(), nullable=False),
        sa.PrimaryKeyConstraint('id'),
        sa.UniqueConstraint('userNickName'),
        sa.UniqueConstraint('userPhoneNum')
    )
    op.drop_index('userNickName', table_name='userinfo')
    op.drop_index('userPhoneNum', table_name='userinfo')
    op.drop_table('userinfo')
    # ### end Alembic commands ###
```
The second time I run the database migration upgrade, it is still trying to create the table.
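A common remedy (a hedged sketch; it assumes the tables were created outside Alembic, e.g. via `db.create_all()`, and that the Flask-Script manager in `db_create.py` exposes Flask-Migrate's commands):

```shell
# Mark the database as already at the latest revision, without running the
# create-table migration, so future autogenerates diff against the real schema.
python db_create.py db stamp head
```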
|
closed
|
2018-03-12T13:48:23Z
|
2018-03-12T14:53:27Z
|
https://github.com/miguelgrinberg/Flask-Migrate/issues/191
|
[] |
MDreamStars
| 2
|
serengil/deepface
|
machine-learning
| 483
|
Deep face takes all GPU memory
|
OS: Ubuntu 20.04 LTS
CPU: Intel(R) Core(TM) i9-10900K CPU @ 3.70GHz
GPU: NVIDIA GeForce RTX 3090
RAM: 32G
Python version: 3.6.9
model: `VGG-Face`
detector backend: `opencv`
deep face version: 0.0.75
install: from pip
I was just trying to run a simple face verification task with:
```python
from deepface import DeepFace
def main():
result = DeepFace.verify(
img1_path='./resources/img1.jpg',
img2_path='./resources/img2.jpg',
# enforce_detection=False
)
print(result)
if __name__ == '__main__':
main()
```
But when I take a look at nvidia-smi, I see that Python takes all the GPU memory, about 22 GB.
Here is the output of nvidia-smi:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.39.01 Driver Version: 510.39.01 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 On | N/A |
| 0% 52C P2 164W / 420W | 23926MiB / 24576MiB | 78% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1544 G /usr/lib/xorg/Xorg 72MiB |
| 0 N/A N/A 2884 G /usr/lib/xorg/Xorg 332MiB |
| 0 N/A N/A 3062 G /usr/bin/kwin_x11 170MiB |
| 0 N/A N/A 3072 G /usr/bin/plasmashell 78MiB |
| 0 N/A N/A 3107 G /usr/bin/latte-dock 79MiB |
| 0 N/A N/A 3152 G ...uangyan/telegram/Telegram 4MiB |
| 0 N/A N/A 110715 G ...875569653288587773,131072 152MiB |
| 0 N/A N/A 207004 G ...RendererForSitePerProcess 22MiB |
| 0 N/A N/A 227814 C .../envs/dl_pj_df/bin/python 22990MiB |
+-----------------------------------------------------------------------------+
```
And it seems other programs cannot allocate GPU memory anymore, i.e. it is not a display bug.
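By default TensorFlow pre-allocates nearly all GPU memory at startup; a hedged sketch (assuming TensorFlow 2.x) of switching to on-demand allocation before any model is loaded:

```python
import tensorflow as tf

# Ask TensorFlow to grow GPU memory on demand instead of grabbing ~all of it.
# This must run before anything touches the GPU.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```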
|
closed
|
2022-05-19T22:37:16Z
|
2022-05-20T10:06:28Z
|
https://github.com/serengil/deepface/issues/483
|
[
"dependencies"
] |
ASSANDHOLE
| 1
|
pyjanitor-devs/pyjanitor
|
pandas
| 703
|
Add ability to generate boolean flags about data quality
|
I just came across ```pyjanitor``` as I was trying to do some research around publicly available data-cleaning libraries in Python. First of all, thanks a lot for all the effort that has been made in developing this package and the continued effort that goes into maintaining it.
I would like to propose the addition of a new segment of methods or functions, let us call it ```time_series_data_cleaning```, that adds some metadata about the quality of the data and makes performing data cleaning very easy. If by any chance you don't feel that these belong here, can you suggest any similar packages you might have come across that offer this functionality?
# Example API
Here are some basic functions that I think would be valuable to have in this section:
1. Missing Timestamps - returns a boolean provided the resolution of the dataframe if any timestamps are found to be missing
2. Monotonic Timestamps - returns a boolean if the dates in the data frame are not monotonic
3. Duplicate Timestamps - returns a boolean flagging duplicates in the data frame with an optional argument to reindex the dataframe
4. Range - returns a boolean flagging values in the data frame that are outside a defined range
5. Stuck - returns a boolean flagging values in the data frame that are stuck for "n" timestamps where "n" is a user input
6. Jump - returns a boolean flagging values in the data frame that see arbitrary jumps from timestamp to timestamp, where an 'acceptable' range is provided by the user
```python
# Let us say we want to check for missing timestamps
df1 = df.check_missing_timestamps(column_name='')
# By default the function will check the index of the input data frame. In the event that column_name is provided,
# it will stop looking at the index and look at the defined column for timestamps
# df1 would have a single column assuming it is not a multi-index dataframe, with a True to indicate a missing timestamp
df1 = df.check_monotonic_timestamps(column_name='') # Check for monotonicity
df1 = df.check_duplicate_timestamps(reindex=True, keep="First")
# Will drop duplicates if flag is True and use keep to determine whether the first or the last occurrence will be retained
# more examples below
df1 = df.check_if_data_in_range(bound=[lower_limit, upper_limt])
# df1 will have the same number of columns as df except that they will be booleans
```
This package (https://pecos.readthedocs.io/en/latest/installation.html) offers some of the functionality I am talking about; however, some of the filters are slow and it lacks the great extensibility that pyjanitor already has.
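A minimal, stdlib-only sketch of the first check proposed above (the function and argument names are illustrative, not pyjanitor API):

```python
from datetime import datetime, timedelta

def check_missing_timestamps(timestamps, resolution):
    """Map each expected timestamp to True if it is absent from the input.

    `timestamps` is an iterable of datetimes; `resolution` is the expected
    spacing as a timedelta. Names are hypothetical, for illustration only.
    """
    present = set(timestamps)
    start, end = min(present), max(present)
    flags = {}
    t = start
    while t <= end:
        flags[t] = t not in present  # True marks a missing timestamp
        t += resolution
    return flags

ts = [datetime(2020, 1, 1, h) for h in (0, 1, 3)]  # hour 2 is missing
flags = check_missing_timestamps(ts, timedelta(hours=1))
print(flags[datetime(2020, 1, 1, 2)])  # True: hour 2 was missing
```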
|
closed
|
2020-07-21T23:37:47Z
|
2020-07-28T17:24:08Z
|
https://github.com/pyjanitor-devs/pyjanitor/issues/703
|
[
"triage"
] |
UGuntupalli
| 11
|
lundberg/respx
|
pytest
| 228
|
Router does not honor `assert_all_called`
|
I'm using the router like so:
```python
import pytest
import httpx
import respx
from syncer.workitem import WorkItem
@pytest.fixture
def router():
router = respx.Router(base_url="https://dev.azure.com/")
router.get(
path__regex=r"\/workitems\/\d+$",
name="get",
)
router.get(
path__regex=r"\/workitems\/\d+\/comments$",
name="comments-get",
)
router.post(
path__regex=r"\/workitems\/\d+\/comments$",
name="comments-post",
)
router.patch(
path__regex=r"\/workitems\/\d+\/comments\/d+$",
name="comments-patch",
)
return router
@pytest.fixture
def http_client(router):
mock_transport = httpx.MockTransport(router.async_handler)
return httpx.AsyncClient(transport=mock_transport)
@pytest.fixture
def workitem(http_client):
def factory():
return WorkItem("test_org", http_client)
return factory
async def test_fetch(router, workitem):
router["get"].respond(json={})
await workitem().fetch()
assert router.routes["get"].call_count == 1
assert router.routes["comments-get"].call_count == 0
assert router.routes["comments-post"].call_count == 0
assert router.routes["comments-patch"].call_count == 0
```
The test passes, but I would expect it to fail since I'm not calling all mocked routes (which the test itself asserts).
What am I doing wrong or misunderstanding?
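For reference, the expected semantics can be expressed as plain bookkeeping over the named routes; a hedged, stdlib-only sketch (the helper and `FakeRoute` are hypothetical, not respx's API):

```python
# Hypothetical helper: fail when some named routes were never called.
# `routes` stands in for any mapping of name -> object with a call_count.
class FakeRoute:
    def __init__(self, call_count):
        self.call_count = call_count

def assert_all_called(routes):
    uncalled = [name for name, r in routes.items() if r.call_count == 0]
    assert not uncalled, f"routes never called: {uncalled}"

routes = {"get": FakeRoute(1), "comments-get": FakeRoute(0)}
try:
    assert_all_called(routes)
    failed = False
except AssertionError:
    failed = True
print(failed)  # True: 'comments-get' was never called
```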
<details><summary>Result of "pip freeze"</summary>
<p>
anyio==3.6.2
assertpy==1.1
async-generator==1.10
attrs==23.1.0
azure-core==1.26.4
azure-servicebus==7.9.0
black==23.3.0
certifi==2022.12.7
charset-normalizer==3.1.0
click==8.1.3
exceptiongroup==1.1.1
flake8==6.0.0
flake8-bugbear==23.3.23
Flake8-pyproject==1.2.3
h11==0.14.0
httpcore==0.17.0
httpx==0.24.0
idna==3.4
iniconfig==2.0.0
isodate==0.6.1
mccabe==0.7.0
mypy-extensions==1.0.0
outcome==1.2.0
packaging==23.1
pathspec==0.11.1
pep8-naming==0.13.3
platformdirs==3.5.0
pluggy==1.0.0
pycodestyle==2.10.0
pyflakes==3.0.1
pytest==7.3.1
pytest-mock==3.10.0
pytest-trio==0.8.0
requests==2.30.0
respx==0.20.1
six==1.16.0
sniffio==1.3.0
sortedcontainers==2.4.0
-e git+https://afaforsakring2@dev.azure.com/afaforsakring2/Utvecklingsstod/_git/syncer@611d0bcb65a906f82c0927725347140eab791774#egg=syncer
tomli==2.0.1
trio==0.22.0
typing_extensions==4.5.0
uamqp==1.6.4
urllib3==2.0.2
</p>
</details>
FWIW I'm using python 3.10.6 running in WSL2 (Ubuntu 22.04) on Windows 10.
|
closed
|
2023-05-04T20:23:27Z
|
2023-07-21T14:22:49Z
|
https://github.com/lundberg/respx/issues/228
|
[] |
BeyondEvil
| 5
|
deezer/spleeter
|
deep-learning
| 187
|
[Discussion] your question: confused about GPU when I try to train the model
|
<!-- Please respect the title [Discussion] tag. -->
The command is: `CUDA_VISIBLE_DEVICES='1' python __main__.py train -p /home/tae/spleeter_master/configs/4stems/base_config.json -d /home/tae/musdb18hq`, using tensorflow-gpu. But the GPU utilization ratio is zero. How can I use the GPU to train my own model through `__main__()`?
|
closed
|
2019-12-17T11:49:40Z
|
2019-12-30T10:12:13Z
|
https://github.com/deezer/spleeter/issues/187
|
[
"question"
] |
DaerTaeKook
| 1
|
PaddlePaddle/ERNIE
|
nlp
| 372
|
A small question about the num_seqs Variable
|
In finetune/classifier.py, num_seqs is a Variable tensor and is not in the feed list; how does it directly obtain the batch_size?
|
closed
|
2019-11-22T06:39:07Z
|
2020-05-28T09:52:37Z
|
https://github.com/PaddlePaddle/ERNIE/issues/372
|
[
"wontfix"
] |
wq343580510
| 2
|
allure-framework/allure-python
|
pytest
| 404
|
Can't use allure report with pytest under mac
|
Hi, I'm having trouble with Allure reports using pytest. It seems the allure command is recognized, but somehow when I add it to the pytest command as a parameter it causes an error and exits. Maybe I'm missing something; here's the command line:
`pytest.main("--maxfail=50 /Users/distiller/project/tests --alluredir=/Users/distiller/project/tests/build/reports")`
If I use `--junitxml` instead it works.
I'm triggering tests under a remote Mac using CircleCi tool. Here's mac build configuration:
```
version: 2
jobs:
build:
macos:
xcode: "10.1.0"
steps:
- checkout
- run:
halt_build_on_fail: false
name: Setup environment for test
command: |
xcodebuild -version
pip install -U selenium
pip install --upgrade pip
pip install -U webium
pip install pytest
pip install -U pytest-allure-adaptor
pip install pytest-html
pip install pyperclip==1.5.27
pip install seleniumwrapper
pip install pycrypto
pip install requests
pip install -U py2app
brew install pigz
python -c "import selenium; print(selenium.__version__)"
brew install allure
sudo /usr/bin/safaridriver --enable
- run:
name: Make Unit Test
command: |
cd tools && chmod a+x tests_runner.py
py2applet --make-setup tests_runner.py
python setup.py py2app -A
open -a /Users/distiller/project/tools/dist/tests_runner.app
testrunner=$(eval "ps -ef | grep -v "grep" | grep 'tests_runner' | wc -l")
while [ "$testrunner" != 0 ]
do
testrunner=$(eval "ps -ef | grep -v "grep" | grep 'tests_runner' | wc -l")
echo "running tests.."
sleep 5
done
no_output_timeout: 20m
- run:
name: Allure Generate
when: always
command: |
allure generate -o /Users/distiller/project/tests/build/reports/ /Users/distiller/project/tests/build/reports/
mv /Users/distiller/project/tests/build/reports/ /tmp/app
- store_artifacts:
path: /tmp/app
```
I'm using:
```
- Pytest: 4.6.4
- Allure version: 2.12.1
- pytest-allure-adaptor: 1.7.10
- Selenium: 3.141
```
Attaching console log I got after attempting test run with allure:
[console remote error.txt](https://github.com/allure-framework/allure-python/files/3398478/console.remote.error.txt)
I'd appreciate it if you could help me with this; I'm pretty sure I'm missing some configuration detail.
Thanks in advance!
Regards,
Ashley
|
closed
|
2019-07-16T18:07:54Z
|
2019-07-19T21:06:48Z
|
https://github.com/allure-framework/allure-python/issues/404
|
[] |
ashea
| 20
|
jacobgil/pytorch-grad-cam
|
computer-vision
| 270
|
The color of the background
|
Thanks for your work.
I use the code of [EigenCAM for YOLO5.ipynb]
I tested one image using your example, but the color of the feature map for the background is red. In your example, the background is blue.
the result looks as follows:
I want to know the reason
<img width="1726" alt="result" src="https://user-images.githubusercontent.com/43560407/174472309-170c2f89-59b0-49fa-a293-6de59f87ec96.png">
|
closed
|
2022-06-19T08:24:57Z
|
2023-08-22T18:36:40Z
|
https://github.com/jacobgil/pytorch-grad-cam/issues/270
|
[] |
Dongjiuqing
| 3
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 37
|
No module named 'segmentation_models_pytorch.common.blocks'
|
Hi,
I'm working on an internet-restricted system. I've installed segmentation_models.pytorch from source using `pip install ..`
Now when I try to import it, I get following error:
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-3-b9e13fa886e0> in <module>
----> 1 import segmentation_models_pytorch as smp
/opt/conda/lib/python3.6/site-packages/segmentation_models_pytorch/__init__.py in <module>
----> 1 from .unet import Unet
2 from .linknet import Linknet
3 from .fpn import FPN
4 from .pspnet import PSPNet
5
/opt/conda/lib/python3.6/site-packages/segmentation_models_pytorch/unet/__init__.py in <module>
----> 1 from .model import Unet
/opt/conda/lib/python3.6/site-packages/segmentation_models_pytorch/unet/model.py in <module>
----> 1 from .decoder import UnetDecoder
2 from ..base import EncoderDecoder
3 from ..encoders import get_encoder
4
5
/opt/conda/lib/python3.6/site-packages/segmentation_models_pytorch/unet/decoder.py in <module>
3 import torch.nn.functional as F
4
----> 5 from ..common.blocks import Conv2dReLU
6 from ..base.model import Model
7
ModuleNotFoundError: No module named 'segmentation_models_pytorch.common.blocks'
```
Any ideas how this error can be solved?
|
closed
|
2019-08-06T19:13:09Z
|
2019-10-15T15:04:03Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/37
|
[] |
pyaf
| 0
|
ipython/ipython
|
data-science
| 13,875
|
Annoying venv warning
|
Hi!
Using `ipython` 8.7.0 I just identified that the block below always triggers the annoying warning when I run `ipython` within a `pyenv` venv.
https://github.com/ipython/ipython/blob/d38397b078b744839b8510f7ac9ab4fa40450a4f/IPython/core/interactiveshell.py#L886-L890
The instance variable `warn_venv` occurs twice in the whole module (both locations are shown in the snippets here) and seems to always be true.
https://github.com/ipython/ipython/blob/d38397b078b744839b8510f7ac9ab4fa40450a4f/IPython/core/interactiveshell.py#L511-L514
I see no point in always getting that message, although I clearly run the `ipython` executable from the currently active venv.
Maybe my symlink setup causes that?
Here you see something in my `neovim` session.
Top pane: the module where above code listings come from
Middle pane: vars in a debug session in `pdb`
Bottom pane: my symlink of `$HOME/.pyenv` and the env var `VIRTUAL_ENV`
Maybe someone wants to look deeper into that scenario. For now it looks like a warning that should not appear in this case and I disable it.
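Since `warn_venv` is declared as a configurable trait in the snippet above, a hedged sketch of switching it off through the standard traitlets configuration (file path per IPython's default profile layout):

```python
# In ~/.ipython/profile_default/ipython_config.py
c = get_config()
c.InteractiveShell.warn_venv = False
```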
I wish a Merry Christmas,
Jamil
<img width="1782" alt="image" src="https://user-images.githubusercontent.com/45113557/209349729-588e865b-f3be-472d-bea3-3e228d4b7a6d.png">
|
open
|
2022-12-23T14:16:15Z
|
2023-10-09T15:08:51Z
|
https://github.com/ipython/ipython/issues/13875
|
[] |
jamilraichouni
| 7
|
zappa/Zappa
|
flask
| 792
|
[Migrated] Issue while deploying Django project
|
Originally from: https://github.com/Miserlou/Zappa/issues/1943 by [CapturedCarbon](https://github.com/CapturedCarbon)
<!--- Provide a general summary of the issue in the Title above -->
Fails while deploy a django project
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
I have matched the dependencies, tried all possible suggestions but it doesn't seem to work
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
## Expected Behavior
Deployment complete!:
## Actual Behavior
Packaging project as gzipped tarball.
ERROR: To modify pip, please run the following command:
C:\Users\asds\newenv\Scripts\python.exe -m pip install --quiet --target C:\Users\asds\PycharmProjects\project\handler_venv\Lib\site-packages chardet==3.0.4 wheel==0.33.6 six==1.11.0 Werkzeug==0.16.0 Click==7.0 python-dateutil==2.6.1 toml==0.10.0 certifi==2018.4.16 idna==2.7 kappa==0.6.0 requests==2.20.1 placebo==0.9.0 lambda-packages==0.20.0 jmespath==0.9.3 urllib3==1.24.3 botocore==1.12.215 wsgi-request-logger==0.4.6 cfn-flip==1.2.1 docutils==0.15.2 durationpy==0.5 python-slugify==1.2.4 pip==19.3 s3transfer==0.2.1 boto3==1.9.215 future==0.16.0 troposphere==2.5.2 hjson==3.0.1 argcomplete==1.9.3 zappa==0.48.2 tqdm==4.19.1 PyYAML==5.1.2 Unidecode==1.1.1 setuptools
Oh no! An error occurred! :(
==============
Traceback (most recent call last):
File "c:\users\asds\newenv\lib\site-packages\zappa\cli.py", line 2779, in handle
sys.exit(cli.handle())
File "c:\users\asds\newenv\lib\site-packages\zappa\cli.py", line 509, in handle
self.dispatch_command(self.command, stage)
File "c:\users\asds\newenv\lib\site-packages\zappa\cli.py", line 546, in dispatch_command
self.deploy(self.vargs['zip'])
File "c:\users\asds\newenv\lib\site-packages\zappa\cli.py", line 718, in deploy
self.create_package()
File "c:\users\asds\newenv\lib\site-packages\zappa\cli.py", line 2225, in create_package
venv=self.zappa.create_handler_venv(),
File "c:\users\asds\newenv\lib\site-packages\zappa\core.py", line 435, in create_handler_venv
raise EnvironmentError("Pypi lookup failed")
OSError: Pypi lookup failed
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.48.2
* Operating System and Python version: windows : Python 3.6.4
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.py`:
|
closed
|
2021-02-20T12:42:30Z
|
2022-07-16T06:16:20Z
|
https://github.com/zappa/Zappa/issues/792
|
[] |
jneves
| 1
|
sqlalchemy/alembic
|
sqlalchemy
| 1,230
|
postgresql ExcludeConstraint behave differently with --autogenerate compare to sqlalchemy Base.metadata.create_all
|
**Describe the bug**
when running `alembic revision --autogenerate`, on a table with
``` python
ExcludeConstraint(
(func.tstzrange(effective_time, expiry_time), "&&"),
using="gist",
)
```
it generates
``` python
postgresql.ExcludeConstraint(('tstzrange(effective_time, expiry_time)', '&&'), using='gist'),
```
in the revision script, resulting in `sqlalchemy.exc.ConstraintColumnNotFoundError: Can't create ExcludeConstraint on table 'time_slot': no column named 'tstzrange(effective_time, expiry_time)' is present.`
Adding `sqlalchemy.text()` around the string does fix the issue.
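A hedged sketch of that workaround applied to the generated revision line (assuming `sqlalchemy as sa` is imported in the revision script, as Alembic's templates do):

```python
postgresql.ExcludeConstraint(
    (sa.text("tstzrange(effective_time, expiry_time)"), "&&"),
    using="gist",
)
```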
**Expected behavior**
the table definition works with `sqlalchemy`'s `Base.metadata.create_all(bind=engine)`, creating
```
"time_slot_tstzrange_excl" EXCLUDE USING gist (tstzrange(effective_time, expiry_time) WITH &&)
```
in the database so I expect it works the same in alembic
**To Reproduce**
I setup a minimal example in this repo: https://github.com/tc-yu/alembic-pgsql-tztsrange
model:
```py
from datetime import datetime
from sqlalchemy import DateTime, Identity, func
from sqlalchemy.dialects.postgresql import ExcludeConstraint
from sqlalchemy.orm import Mapped, declarative_base, mapped_column
Base = declarative_base()


class TimeSlot(Base):
__tablename__ = "time_slot"
id: Mapped[int] = mapped_column(Identity(), primary_key=True)
effective_time: Mapped[datetime | None] = mapped_column(
DateTime(timezone=True), index=True
)
expiry_time: Mapped[datetime | None] = mapped_column(
DateTime(timezone=True), index=True
)
__table_args__ = (
ExcludeConstraint(
(func.tstzrange(effective_time, expiry_time), "&&"),
using="gist",
),
)
```
**Error**
```
sqlalchemy.exc.ConstraintColumnNotFoundError: Can't create ExcludeConstraint on table 'time_slot': no column named 'tstzrange(effective_time, expiry_time)' is present.
```
**Versions.**
- OS: macOS
- Python: 3.11.3
- Alembic: 1.10.4
- SQLAlchemy: 2.0.11
- Database: postgresql 15.2
- DBAPI: psycopg 3.1.8
**Additional context**
This is the first time I have worked with ExcludeConstraint, so it might just be that I am defining it the wrong way in the model.
**Have a nice day!**
|
closed
|
2023-04-28T05:46:53Z
|
2023-05-03T13:19:24Z
|
https://github.com/sqlalchemy/alembic/issues/1230
|
[
"bug",
"autogenerate - rendering",
"postgresql"
] |
tc-yu
| 2
|
ageitgey/face_recognition
|
python
| 793
|
Dlib has no attribute get frontal face_detector
|
* face_recognition version:1.23.
* Python version:3.6.5
* Operating System:windows 7
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
IMPORTANT: If your issue is related to a specific picture, include it so others can reproduce the issue.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
|
open
|
2019-04-03T08:43:51Z
|
2019-06-20T10:10:45Z
|
https://github.com/ageitgey/face_recognition/issues/793
|
[] |
HemanthKumarGadi
| 1
|
Yorko/mlcourse.ai
|
scikit-learn
| 709
|
Issue on page /book/topic02/topic02_visual_data_analysis.html
|
item 5 is missing.

https://github.com/Yorko/mlcourse.ai/blob/54bca0f807481af872ca3245225c5084df5375d1/book/topic02/topic02_visual_data_analysis.html#L806
|
closed
|
2022-07-15T15:57:26Z
|
2022-08-27T17:37:44Z
|
https://github.com/Yorko/mlcourse.ai/issues/709
|
[] |
KingAndJoker
| 0
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 860
|
Disable cookies
|
How can I disable cookies?
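A hedged sketch of one way this is often done (it assumes `uc.ChromeOptions` mirrors Selenium's `ChromeOptions`; the preference key is a Chrome content setting, where 2 means block):

```python
import undetected_chromedriver as uc

options = uc.ChromeOptions()
# Block cookies via Chrome's content-setting preference (2 = block).
options.add_experimental_option(
    "prefs", {"profile.default_content_setting_values.cookies": 2}
)
driver = uc.Chrome(options=options)
```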
|
closed
|
2022-10-27T13:22:32Z
|
2023-07-01T09:25:55Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/860
|
[] |
MazenTayseer
| 1
|
huggingface/datasets
|
deep-learning
| 6,720
|
TypeError: 'str' object is not callable
|
### Describe the bug
I am trying to get the HPLT datasets on the hub. Downloading/re-uploading would be too time- and resource-consuming, so I wrote [a dataset loader script](https://huggingface.co/datasets/BramVanroy/hplt_mono_v1_2/blob/main/hplt_mono_v1_2.py). I think I am very close, but for some reason I always get the error below. It happens during the clean-up phase, where the directory cannot be removed because it is not empty.
My only guess would be that this may have to do with zstandard.
```
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1744, in _prepare_split_single
writer.write(example, key)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write
self.write_examples_on_file()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 434, in write_examples_on_file
if self.schema
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 409, in schema
else (pa.schema(self._features.type) if self._features is not None else None)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1643, in type
return get_nested_type(self)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in get_nested_type
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in <dictcomp>
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1221, in get_nested_type
value_type = get_nested_type(schema.feature)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1228, in get_nested_type
return schema()
TypeError: 'str' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1753, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 588, in finalize
self.write_examples_on_file()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 434, in write_examples_on_file
if self.schema
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 409, in schema
else (pa.schema(self._features.type) if self._features is not None else None)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1643, in type
return get_nested_type(self)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in get_nested_type
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in <dictcomp>
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1221, in get_nested_type
value_type = get_nested_type(schema.feature)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1228, in get_nested_type
return schema()
TypeError: 'str' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 959, in incomplete_dir
yield tmp_dir
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1605, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1762, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pricie/vanroy/.config/JetBrains/PyCharm2023.3/scratches/scratch_5.py", line 4, in <module>
ds = load_dataset(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/load.py", line 2549, in load_dataset
builder_instance.download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 985, in download_and_prepare
with incomplete_dir(self._output_dir) as tmp_output_dir:
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 966, in incomplete_dir
shutil.rmtree(tmp_dir)
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/shutil.py", line 731, in rmtree
onerror(os.rmdir, path, sys.exc_info())
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/shutil.py", line 729, in rmtree
os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/home/pricie/vanroy/.cache/huggingface/datasets/BramVanroy___hplt_mono_v1_2/ky/1.2.0/7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete'
```
Interestingly, though, this directory _does_ appear to be empty:
```shell
> cd /home/pricie/vanroy/.cache/huggingface/datasets/BramVanroy___hplt_mono_v1_2/ky/1.2.0/7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete
> ls -lah
total 0
drwxr-xr-x. 1 vanroy vanroy 0 Mar 7 12:01 .
drwxr-xr-x. 1 vanroy vanroy 304 Mar 7 11:52 ..
> cd ..
> ls
7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47_builder.lock 7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset(
"BramVanroy/hplt_mono_v1_2",
"ky",
trust_remote_code=True
)
```
### Expected behavior
No error.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0
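For what it's worth, a hedged, stdlib-only sketch of the mechanism visible in the first traceback: `get_nested_type` ends in `schema()`, so any feature leaf that is a bare string rather than a callable feature object raises exactly this `TypeError` (a simplified stand-in, not the real `datasets` code):

```python
# Simplified stand-in for datasets.features.get_nested_type: it recurses
# through dicts and finally *calls* the leaf, assuming it is a feature type.
def get_nested_type(schema):
    if isinstance(schema, dict):
        return {key: get_nested_type(schema[key]) for key in schema}
    return schema()  # raises if the leaf is a bare string

try:
    get_nested_type({"text": "string"})  # str leaf instead of a feature object
except TypeError as exc:
    message = str(exc)
print(message)  # 'str' object is not callable
```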
|
closed
|
2024-03-07T11:07:09Z
|
2024-03-08T07:34:53Z
|
https://github.com/huggingface/datasets/issues/6720
|
[] |
BramVanroy
| 2
|
rio-labs/rio
|
data-visualization
| 144
|
Add Hover Height to `Card`
|
For consistency, add a hover height effect for `Card`s similar to the one in the `Rectangle` component. Same as for `Rectangle`: e.g. `shadow_radius=3`
|
open
|
2024-09-18T14:41:34Z
|
2024-09-18T14:41:41Z
|
https://github.com/rio-labs/rio/issues/144
|
[
"enhancement"
] |
Sn3llius
| 0
|
django-import-export/django-import-export
|
django
| 1,834
|
Breaking change due to _check_import_id_fields
|
**Describe the bug**
This is a breaking change in 4.0 vs 3.x.
In my resource, I've declared a `before_import_row()` to add another column 'Object' to the row, which is used as an identifier.
```
object = resources.Field(column_name="Object", attribute="object")
def before_import_row(self, row, **kwargs):
....
row["Object"] = hashlib.sha256((row.get("Object 1 Name") or "").encode()).hexdigest()
...
class Meta:
...
import_id_fields = ["time", "object"]
```
However, after upgrading to 4.0.2, the import throws an error
```
File "***/python3.12/site-packages/import_export/resources.py", line 1186, in _check_import_id_fields
raise exceptions.FieldError(
import_export.exceptions.FieldError: The following fields are declared in 'import_id_fields' but are not present in the file headers: Object
```
This check on the headers does not seem to take into account that rows can be modified (e.g. in `before_import_row()`) before they are imported.
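One possible workaround (a sketch, not the library's API) is to derive the synthetic 'Object' column for every row up front, so that the column exists by the time the id fields are checked. The hash matches the snippet above; in django-import-export 4.x this logic may need to run in `before_import()` rather than `before_import_row()`, which is an assumption worth verifying:

```python
import hashlib


def derive_object_id(row: dict) -> str:
    """Compute the synthetic 'Object' identifier used in import_id_fields."""
    name = row.get("Object 1 Name") or ""
    return hashlib.sha256(name.encode()).hexdigest()


def add_object_column(rows: list) -> None:
    """Inject the derived column into every row (plain dicts standing in
    for the tablib dataset rows) before the id-fields check runs."""
    for row in rows:
        row["Object"] = derive_object_id(row)
```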
**Versions (please complete the following information):**
- Django Import Export: 4.0.2
- Python 3.12
- Django 5.0.6
|
closed
|
2024-05-14T17:27:21Z
|
2024-05-15T08:48:42Z
|
https://github.com/django-import-export/django-import-export/issues/1834
|
[
"bug"
] |
jameslao
| 2
|
pyeve/eve
|
flask
| 1,200
|
Add GitHub metadata and documentation link to PyPI
|
For reference, take a look at [Flask's page on PyPI](https://pypi.org/project/Flask/)
<img width="297" alt="image" src="https://user-images.githubusercontent.com/512968/46522611-8e642980-c883-11e8-9532-11162d773b98.png">
|
closed
|
2018-10-05T07:47:15Z
|
2019-04-11T09:01:48Z
|
https://github.com/pyeve/eve/issues/1200
|
[
"enhancement"
] |
nicolaiarocci
| 3
|
robotframework/robotframework
|
automation
| 4,864
|
Process: Make warning about processes hanging if output buffers get full more visible
|
Related to #3661,
Before finding the related issue above, my binary hung when using `Run Process`. It consumed a lot of my time investigating whether the robot test had found a bug or not. From the related issue I found out that it is a limitation or unexpected behavior of RF. Documenting this limitation would really help other developers avoid misinterpreting their tests and doing unnecessary investigation. Thank you in advance.
RobotFramework Version: 6.1.1
Actions to resolve the issue:
- [ ] Add this limitation or behavior in the Process Library Documentation
- [ ] Fix the bug ( if possible )
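The hang described in #3661 is the classic pipe-buffer deadlock: a child process that writes more output than the OS pipe buffer holds will block until the parent reads it. A minimal stdlib sketch (not Robot Framework's implementation) of the safe pattern, redirecting output to a file so the child can never block:

```python
import subprocess
import tempfile


def run_with_file_output(cmd: list) -> str:
    """Run a command, sending stdout/stderr to a temp file instead of a
    pipe, so the child never blocks on a full pipe buffer."""
    with tempfile.TemporaryFile(mode="w+") as out:
        subprocess.run(cmd, stdout=out, stderr=subprocess.STDOUT, check=True)
        out.seek(0)
        return out.read()
```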
|
closed
|
2023-09-11T10:30:57Z
|
2023-11-07T09:14:55Z
|
https://github.com/robotframework/robotframework/issues/4864
|
[
"enhancement",
"priority: low",
"alpha 1",
"effort: small"
] |
pat0026
| 3
|
plotly/dash-recipes
|
dash
| 11
|
download-raw-data.py doesn't work in IE11
|
URI links aren't allowed in IE11, so the download CSV links don't work. Is there an alternative that is easy to implement with Dash? Maybe some of the answers from [this stackoverflow question](https://stackoverflow.com/q/3916191/6068036) would help, like using another JavaScript library like downloadify.js or download.js.
|
closed
|
2018-06-22T14:33:00Z
|
2018-06-22T14:40:46Z
|
https://github.com/plotly/dash-recipes/issues/11
|
[] |
amarvin
| 4
|
pandas-dev/pandas
|
pandas
| 60,954
|
BUG: Segmentation Fault when changing a column name in a DataFrame
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import re
import uuid
import numpy as np
import pandas as pd
## Generate example DataFrame
t = pd.date_range(start='2023-01-01 00:00', periods=10, freq='10min')
x = np.random.randn(t.size)
y = np.random.randn(t.size)
temp = np.random.randn(t.size)
df = pd.DataFrame({
'Timestamp': t,
'X position (m)': x,
'Y position (m)': y,
'Temperature (degC)': temp,
})
df = pd.concat([
pd.DataFrame(
[dict(
zip(list(df.columns),
['SignalId'] + [str(uuid.uuid4()) for i in range(df.columns.size - 1)]
))]
),
df], ignore_index=True
)
df = df.set_index('Timestamp')
## Change column name inplace
for i, c in enumerate(list(df.columns)):
newc = re.sub(r'\s+position\s+', ' ', c)
df.columns.values[i] = newc
## Printing DataFrame to screen may generate a segmentation fault
df
```
### Issue Description
When a column name from a DataFrame is changed inplace (at the values), sometimes it leads to a *segmentation fault*. This seems more likely if the DataFrame contains mixed element types (as per example below).
Hypotheses are:
- The change in the name leads to corruption of the data in memory.
- NumPy version >2 leads to different data types that may conflict somehow with some operations.
Example:
```python
>>> import re
>>> import uuid
>>> import numpy as np
>>> import pandas as pd
>>>
>>> t = pd.date_range(start='2023-01-01 00:00', periods=10, freq='10min')
>>> x = np.random.randn(t.size)
>>> y = np.random.randn(t.size)
>>> temp = np.random.randn(t.size)
>>> df = pd.DataFrame({
... 'Timestamp': t,
... 'X position (m)': x,
... 'Y position (m)': y,
... 'Temperature (degC)': temp,
... })
>>> df = pd.concat([
... pd.DataFrame(
... [dict(
... zip(list(df.columns),
... ['SignalId'] + [str(uuid.uuid4()) for i in range(df.columns.size - 1)]
... ))]
... ),
... df], ignore_index=True
... )
>>> df = df.set_index('Timestamp')
>>>
>>> df
X position (m) Y position (m) Temperature (degC)
Timestamp
SignalId da8a0a1b-a022-48cc-9e17-91b4b103cc5b e92dad78-6128-45d5-8545-b45e80345da9 3106111b-0f53-4122-a89f-e1f78aac72b9
2023-01-01 00:00:00 1.66612 0.503874 -0.202982
2023-01-01 00:10:00 -1.266542 0.141686 0.488124
2023-01-01 00:20:00 -0.46789 -0.132084 -1.011771
2023-01-01 00:30:00 1.276952 -0.811061 -1.735414
2023-01-01 00:40:00 1.178987 -0.245169 1.295712
2023-01-01 00:50:00 -1.503673 0.60517 -0.946938
2023-01-01 01:00:00 -1.095622 -0.920928 -0.233186
2023-01-01 01:10:00 -1.276511 0.710022 1.94653
2023-01-01 01:20:00 -0.470105 -0.643144 1.380882
2023-01-01 01:30:00 1.426826 -0.286228 1.351435
>>> for i, c in enumerate(list(df.columns)):
... newc = re.sub(r'\s+position\s+', ' ', c)
... df.columns.values[i] = newc
...
>>> df
Segmentation fault (core dumped)
```
### Expected Behavior
Though the operation may be debatable (the change inplace of the column name via `df.column.values[i] = new_name`), it is a valid operation without any other warning or error message. The ensuing segmentation fault is completely random (so very hard to diagnose).
Hence the expected behaviour is to either block these operations, or alternatively to fully allow those if these are to be permitted.
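A safer pattern that avoids mutating the index buffer in place is to build the new labels through pandas' own API, e.g. `rename`. This is a sketch of the workaround, not a fix for the underlying crash:

```python
import re

import pandas as pd

df = pd.DataFrame({"X position (m)": [1.0], "Y position (m)": [2.0]})

# Rebuild the column labels without touching df.columns.values in place.
df = df.rename(columns=lambda c: re.sub(r"\s+position\s+", " ", c))
```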
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.11
python-bits : 64
OS : Linux
OS-release : 4.19.0-27-amd64
Version : #1 SMP Debian 4.19.316-1 (2024-06-25)
machine : x86_64
processor :
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.0.2
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : 8.18.1
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.13.1
sqlalchemy : None
tables : None
tabulate : None
xarray : 2024.7.0
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
|
closed
|
2025-02-18T10:07:42Z
|
2025-03-11T16:49:36Z
|
https://github.com/pandas-dev/pandas/issues/60954
|
[
"Docs",
"Index"
] |
cvr
| 8
|
plotly/dash
|
flask
| 2,359
|
Auto sizing a single axis not properly handled
|
**Describe your context**
Plotly 5.11.0
Dash 2.7.0
Python 3.10
Dockerfile:
```
FROM debian:bullseye
RUN apt update && apt install -y --no-install-recommends python3 python3-pip
RUN pip install plotly==5.11.0 dash==2.7.0 dash-bootstrap-components==1.2.1 pandas==1.5.2
COPY swap_charts.py /tmp
EXPOSE 8050
CMD python3 /tmp/swap_charts.py
```
**Describe the bug**
My application is a simple report editor which is a column of charts and markdown where each chart or markdown can be selected and moved up or down in the column. To move charts up and down I have a callback which swaps adjacent children in the containing element.
When both the width and height of a chart are set to explicit sizes or when they are both None the swap works fine. When the height is set and width is None (so that width is auto sized), the swap results in swapped charts but sizes remain as they were.
The following is a minimal working example of the problem:
```
import json
from random import randint
import dash
from dash import Dash, dcc, html, Output, Input, State
import plotly.express as px
@dash.callback(
Output("chart-container", "children"),
Input("swap-button", "n_clicks"),
State("chart-container", "children"),
prevent_initial_call=True,
)
def update_chart(unused, charts):
"""Swap pairs of charts"""
charts[0], charts[1] = charts[1], charts[0]
charts[2], charts[3] = charts[3], charts[2]
return charts
def layout():
"""Create 2 charts with height and 2 charts with width and height"""
graphs = []
for i in range(4):
fig = px.bar(
x=list(range(5)),
y=[randint(1, 10) for _ in range(5)],
title=f"Chart {i}"
)
width = 400 if i > 1 else None
fig.update_layout(height=200*(i%2 + 1), width=width)
graphs.append(dcc.Graph(figure=fig))
chart = html.Div(children=graphs, id="chart-container")
button = html.Button("Swap", id="swap-button")
layout = html.Div([chart, button])
return layout
if __name__ == "__main__":
app = Dash()
app.layout = layout()
app.run_server(host="0.0.0.0", debug=True)
```
The following image shows the original order and sizes of the charts. Charts 0 and 2 have a height of 200px while charts 1 and 3 have a height of 400px. Charts 0 and 1 have a width of None while charts 2 and 3 have a width of 400px.

Swapping charts 0 and 1 and charts 2 and 3 results in correct sizes for charts 2 and 3 but the wrong sizes for charts 1 and 0. Although charts 1 and 0 are swapped, their sizes are not.

**Expected behavior**
When swapping child elements containing plotly charts the proper sizes should be retained.
|
open
|
2022-12-08T15:50:48Z
|
2024-08-13T19:24:13Z
|
https://github.com/plotly/dash/issues/2359
|
[
"bug",
"P3"
] |
mneilly
| 1
|
iterative/dvc
|
machine-learning
| 10,703
|
`dvc pull` takes ~20 minutes with no console output for 15 minutes unless run with `-v -v`
|
# Bug Report: `dvc pull` Takes ~20 Minutes With No Progress Unless Using Double Verbosity
## Description
We have a repository that stores about **31GB** of data in a Google Cloud Storage remote. Approximately **15GB** comes from ~60 main data files, and another **16GB** is apparently from 33 “temp” files that do not seem to be actively used.
When running `dvc pull`, the command takes around **20 minutes** to complete. Most of that time (~15 minutes) shows **no console output**. Only when using `-v -v` do we see logs during this period.
With `-v -v`, DVC displays repeated trace messages about “stage collection” in blocks of 30 lines, each block taking ~30 seconds, repeated almost 30 times in a row. For example:
```
2025-03-11 21:09:47,135 TRACE: Context during resolution of stage create-chassis-projects:
{'paths': {'raw': {'jaguars': 'raw/chevrolet/jaguar-chassis-1.9M.csv'}}}
2025-03-11 21:09:47,141 TRACE: Context during resolution of stage unpack-dataset1:
{'paths': {'raw': {'jaguars': 'raw/chevrolet/jaguar-chassis-1.9M.csv'}}}
2025-03-11 21:09:47,146 TRACE: Context during resolution of stage create-muffler-projects:
{'paths': {'raw': {'jaguars': 'raw/chevrolet/jaguar-chassis-1.9M.csv'}}}
2025-03-11 21:09:47,159 TRACE: Context during resolution of stage update-chassis-curation:
{'paths': {'raw': {'jaguars': 'raw/chevrolet/jaguar-chassis-1.9M.csv'}}}
2025-03-11 21:09:47,171 TRACE: Context during resolution of stage update-muffler-curation:
{'paths': {'raw': {'jaguars': 'raw/chevrolet/jaguar-chassis-1.9M.csv'}}}
2025-03-11 21:09:47,190 TRACE: Context during resolution of stage create-chassis-tag-projects:
{'paths': {'raw': {'jaguars': 'raw/chevrolet/jaguar-chassis-1.9M.csv'}}}
2025-03-11 21:09:47,200 TRACE: 113.24 ms in collecting stages from /
2025-03-11 21:09:47,202 TRACE: 6.73 mks in collecting stages from /annotation
2025-03-11 21:09:47,204 TRACE: 9.45 mks in collecting stages from /annotation/chassis
2025-03-11 21:09:47,206 TRACE: 9.99 mks in collecting stages from /annotation/muffler
2025-03-11 21:09:47,207 TRACE: 10.32 mks in collecting stages from /curation
2025-03-11 21:09:47,207 TRACE: 4.05 mks in collecting stages from /curation/chassis
2025-03-11 21:09:47,208 TRACE: 6.49 mks in collecting stages from /curation/muffler
2025-03-11 21:09:47,209 TRACE: 8.16 mks in collecting stages from /inception
2025-03-11 21:09:47,209 TRACE: 6.80 mks in collecting stages from /inception/export
2025-03-11 21:09:47,210 TRACE: 11.79 mks in collecting stages from /inception/export/layer
2025-03-11 21:09:47,210 TRACE: 6.00 mks in collecting stages from /inception/export/schema
2025-03-11 21:09:47,211 TRACE: 12.09 mks in collecting stages from /inception/export/tagset
2025-03-11 21:09:47,216 TRACE: 31.29 mks in collecting stages from /inception/guidelines
2025-03-11 21:09:47,216 TRACE: 11.20 mks in collecting stages from /project
2025-03-11 21:09:47,217 TRACE: 7.86 mks in collecting stages from /raw
2025-03-11 21:09:47,218 TRACE: 21.03 mks in collecting stages from /raw/chassis
2025-03-11 21:10:17,196 TRACE: 29.98 s in collecting stages from /raw/muffler
2025-03-11 21:10:17,197 TRACE: 9.75 mks in collecting stages from /scripts
```
These repeated logs consume the majority of the command’s execution time.
## Expected Behavior
1. `dvc pull` should complete in a few minutes to download the ~15GB of actively used data from Google Cloud Storage.
2. `dvc pull` should print intermittent progress messages or updates, rather than appearing “stuck” for 15 minutes with no output unless `-v -v` is used.
## Observed Behavior
1. The command runs for ~20 minutes in total.
2. Minimal (nearly no) console output for ~15 minutes, unless running with `-v -v`.
3. `-v -v` reveals repeated 30-line trace blocks that each take ~30 seconds, repeated ~29 times.
## Additional Context
- About **half of the total remote size** is from `.tmp` files on GCS, which appear unused.
- Even ignoring those `.tmp` files, the repeated “stage collection” logs and the slow resolution process are unexpected.
- The current behavior makes it difficult to track the download progress for large data sets, and the repeated logs significantly increase overall runtime.
- Please find attached the dvc-output.txt, dvc_yaml.txt and params_yaml.txt
We can provide further logs or details if necessary. Thank you for investigating!
[dvc-output.txt](https://github.com/user-attachments/files/19233753/dvc-output.txt)
[dvc_yaml.txt](https://github.com/user-attachments/files/19233830/dvc_yaml.txt)
[params_yaml.txt](https://github.com/user-attachments/files/19233837/params_yaml.txt)
|
open
|
2025-03-13T17:31:04Z
|
2025-03-14T03:59:48Z
|
https://github.com/iterative/dvc/issues/10703
|
[
"triage"
] |
vishvadesai9
| 0
|
vanna-ai/vanna
|
data-visualization
| 813
|
Number of requested results 10 is greater than number of elements in index 0, updating n_results = 0
|
Number of requested results 10 is greater than number of elements in index 0, updating n_results = 0
This problem keeps appearing. Also, after adding the DDL via `train`, the model cannot answer many simple questions. What could be going on? Does anyone know where the problem is? My table schemas have many fields: each table has about 200 fields, and there are 6 tables in total. @pygeek @wgong @livenson @prady00 @gquental
|
open
|
2025-03-14T11:01:17Z
|
2025-03-15T03:00:55Z
|
https://github.com/vanna-ai/vanna/issues/813
|
[] |
wuxiaolianggit
| 2
|
PablocFonseca/streamlit-aggrid
|
streamlit
| 38
|
"Open in Streamlit" Examples do not seem to work
|
Hi there @PablocFonseca!
I am a big fan of this streamlit component and am currently learning how to use it as I build out an application for internal company use. It is a huge improvement over the standard dataframe rendering offered by streamlit. Thank you!
During my learning process, I am referring to your examples and I noticed that they do not seem to run for some reason. I am guessing it's a streamlit version issue or something. The following happens both on my own machine, when I run your examples directly with streamlit v0.89, and when I click the "Run in Streamlit" badge on your README.md:
When loading `example.py`, the page loads, I see it rendered for a split second, then the page goes blank but I still see the streamlit logo in the bottom right-hand corner:

Your `example.py` file runs a multi-page app that references other files like `main_example.py`, etc. Now, when I attempt to run some of these other pages directly, I can. e.g. `fixed_key_example.py`. However, I cannot do this to run `main_example.py`, the page goes blank again.
I am running the following:
* Windows 10
* Firefox 93.0 (same behaviour observed in Chrome)
Have been learning a lot from just reading the code in your examples but would love to be able to interact with them.
If you think you are able to fix this bug in your examples, it would be great for learning more about the subtleties of working with this component.
Many thanks!
|
closed
|
2021-10-12T23:55:59Z
|
2021-10-26T21:15:57Z
|
https://github.com/PablocFonseca/streamlit-aggrid/issues/38
|
[] |
connorferster
| 2
|
praw-dev/praw
|
api
| 1,521
|
difference between "hot" and "top"?
|
Hi! thanks for the app!
I have a question on the definition difference between "hot" and "top".
It seems "top" comes from the most upvotes, but how are "hot" items ranked?
Best
Joseph Jin
|
closed
|
2020-06-05T07:15:51Z
|
2020-06-05T07:31:39Z
|
https://github.com/praw-dev/praw/issues/1521
|
[] |
dryjins
| 1
|
jina-ai/serve
|
fastapi
| 5,690
|
Introduce Job concepts to make Executors suitable for 1-time Request
|
**Describe the feature**
Provide this:
```python
job = Job(uses=Executor, retries=3, ...)
job.run(on='/foo', inputs='s3....', output='s3....')
job.to_kubernetes_yaml() # should map to a K8s BatchJob
```
|
closed
|
2023-02-14T16:10:36Z
|
2024-06-23T00:21:11Z
|
https://github.com/jina-ai/serve/issues/5690
|
[
"Stale"
] |
JoanFM
| 6
|
reiinakano/scikit-plot
|
scikit-learn
| 115
|
Regarding the scikit-plot.metrics.plot_roc function
|
In your code I noticed that if we pass class labels with their actual meaning instead of (0, 1, 2, ...), e.g. as (c, b, a), then `np.unique(y_true)` sorts the classes alphabetically, which changes the positions of the classes the model was trained on:
```python
classes = np.unique(y_true)
fpr_dict[i], tpr_dict[i], _ = roc_curve(y_true, probas[:, i],
                                        pos_label=classes[i])
```
Hence, it would help to add a `class_labels` parameter to the function:
```python
def plot_roc_multi(y_true, y_probas, class_labels, title='ROC Curves',
                   plot_micro=True, plot_macro=True, classes_to_plot=None,
                   ax=None, figsize=None, cmap='nipy_spectral',
                   title_fontsize="large", text_fontsize="medium"):
```
where `class_labels` is an array like [a, b, c]; that would make this much easier, I think.
|
open
|
2021-07-23T13:46:02Z
|
2022-11-04T15:12:46Z
|
https://github.com/reiinakano/scikit-plot/issues/115
|
[] |
Akshay1-6180
| 1
|
huggingface/datasets
|
deep-learning
| 7,116
|
datasets cannot handle nested json if features is given.
|
### Describe the bug
I have a json named temp.json.
```json
{"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]}
```
I want to load it.
```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
'ref1': datasets.Value('string'),
'ref2': datasets.Value('string'),
'cuts': datasets.Sequence({
"cut1": datasets.Value("uint16"),
"cut2": datasets.Value("uint16")
})
}))
```
The above code does not work. However, I can load it without giving features.
```python
ds = datasets.load_dataset('json', data_files="./temp.json")
```
Is it possible to load integers as uint16 to save some memory?
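For scale, a quick numpy illustration (independent of the loader bug) of the memory saving uint16 offers over the default int64:

```python
import numpy as np

cuts = [3, 5, 1000, 65535]
as_int64 = np.array(cuts, dtype=np.int64)
as_uint16 = np.array(cuts, dtype=np.uint16)
# uint16 uses a quarter of the memory of int64 for the same values,
# as long as every value fits in the 0..65535 range.
```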
### Steps to reproduce the bug
As in the bug description.
### Expected behavior
The data are loaded and integers are uint16.
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
|
closed
|
2024-08-20T12:27:49Z
|
2024-09-03T10:18:23Z
|
https://github.com/huggingface/datasets/issues/7116
|
[] |
ljw20180420
| 3
|
cvat-ai/cvat
|
tensorflow
| 9,211
|
Migration of backend data to external storage services (e.g. AWS S3)
|
Hello everyone!
We are currently using CVAT `v2.7.4` in Kubernetes.
In this installation, the backend server pods use a single PV with `AccessMode: ReadWriteMany` for all these pods. This is a very specific type of volume in our infrastructure and we try not to use them unless absolutely necessary.
I would like to clarify two points for myself:
- is there a way to replace this PV with an external storage service (for example, aws S3)?
- if there is such a possibility, how can I migrate data from PV to conditional S3?
I would be very grateful for any recommendations.
|
closed
|
2025-03-14T11:20:54Z
|
2025-03-17T19:10:16Z
|
https://github.com/cvat-ai/cvat/issues/9211
|
[] |
obervinov
| 2
|
pydantic/FastUI
|
pydantic
| 82
|
class_name.py throws TypeError
|
Hi,
Interesting library! I am a newbie and tried a hello-world app with the default code. I run the code in a Docker container on Python 3.9. I get this TypeError:
File "/usr/local/lib/python3.9/site-packages/fastui/class_name.py", line 6, in <module>
ClassName = Annotated[str | list[str] | dict[str, bool | None] | None, Field(serialization_alias='className')]
TypeError: unsupported operand type(s) for |: 'type' and 'types.GenericAlias'
Any suggestions?
Kind regards, Harmen
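For context: the `X | Y` union syntax in annotations is only evaluated at runtime on Python 3.10+. On 3.9 the equivalent spelling uses `typing.Union`/`Optional`. A sketch of the compatible form, with pydantic's `Field(...)` replaced by a placeholder string since it is not part of the stdlib:

```python
from typing import Annotated, Dict, List, Optional, Union, get_args

# 3.9-compatible equivalent of:
#   str | list[str] | dict[str, bool | None] | None
ClassName = Annotated[
    Optional[Union[str, List[str], Dict[str, Optional[bool]]]],
    "serialization_alias=className",  # placeholder for pydantic's Field(...)
]
```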
|
closed
|
2023-12-08T07:12:22Z
|
2023-12-13T09:35:25Z
|
https://github.com/pydantic/FastUI/issues/82
|
[] |
jorritsma
| 4
|
kizniche/Mycodo
|
automation
| 451
|
Overheating ? Others experience ?
|
Version 5.7.2
More of a query of others experience to explain my results ?
We had our first warm day of the year today and as luck would have it I was out all day....and returned to find my pi stopped recording for the hottest part of the day
So my pi got fried. In the image below you can see that once the pi reached ~56°C (the red line), all readings stopped (blue and green); they then resumed when the pi got back down to about 65°C (it reached a max of 72°C). I think I read somewhere the pi is OK up to ~70°C, so why did it stop reading at 56°C, and why did the pi carry on recording? (This must mean the pi was still running, and so was Mycodo, in order to record that data, yet no inputs or outputs were recorded.) The blue/green lines are temperatures and stop/resume at values of around 25/24°C.

The only thing in the logs that starts at the same time as the measurements stop is:
> 2018-04-18 13:47:21,202 - mycodo.lcd_1 - ERROR - Count not initialize LCD. Error: 121
> 2018-04-18 13:47:21,209 - mycodo.lcd_1 - ERROR - IOError: Unable to output to LCD.
> 2018-04-18 13:47:36,530 - mycodo.lcd_1 - ERROR - Count not initialize LCD. Error: 121
> 2018-04-18 13:47:36,533 - mycodo.lcd_1 - ERROR - IOError: Unable to output to LCD.
> 2018-04-18 13:47:52,659 - mycodo.lcd_1 - ERROR - Count not initialize LCD. Error: 121
> 2018-04-18 13:47:52,663 - mycodo.lcd_1 - ERROR - IOError: Unable to output to LCD.
> 2018-04-18 13:48:07,915 - mycodo.lcd_1 - ERROR - Count not initialize LCD. Error: 121
> 2018-04-18 13:48:07,919 - mycodo.lcd_1 - ERROR - IOError: Unable to output to LCD.
(There are also lots of TSL2561 exceptions, but these started well before the measurements stopped.)
|
closed
|
2018-04-18T23:48:17Z
|
2018-05-02T13:45:36Z
|
https://github.com/kizniche/Mycodo/issues/451
|
[] |
drgrumpy
| 4
|
aiortc/aiortc
|
asyncio
| 1,264
|
pull whep problem player = MediaPlayer("http://*****:1985/rtc/v1/whep/?app=live&stream=100")
|
```python
player = MediaPlayer("http://***:1985/rtc/v1/whep/?app=live&stream=100")
video = VideoTransformTrack(relay.subscribe(player.video), transform="airec")
if video:
    pc.addTrack(video)
    pc.addTrack(player.audio)
```
Error:
```
File "E:\ProgramData\miniconda3\envs\mygpt39\lib\site-packages\aiortc\contrib\media.py", line 305, in __init__
    self.__container = av.open(
File "av\\container\\core.pyx", line 420, in av.container.core.open
File "av\\container\\core.pyx", line 266, in av.container.core.Container.__cinit__
File "av\\container\\core.pyx", line 286, in av.container.core.Container.err_check
File "av\\error.pyx", line 326, in av.error.err_check
av.error.EOFError: [Errno 541478725] End of file:
```
|
open
|
2025-03-02T01:56:54Z
|
2025-03-02T01:57:08Z
|
https://github.com/aiortc/aiortc/issues/1264
|
[] |
gg22mm
| 0
|
comfyanonymous/ComfyUI
|
pytorch
| 6,344
|
reactor workflow not working
|
### Expected Behavior
workflow to work
### Actual Behavior
not working
### Steps to Reproduce
Just insert the ReActor 🌌 Fast Face Swap node in a workflow, connect what is needed, and that is it.
### Debug Logs
```powershell
got prompt
[ReActor] 16:05:31 - STATUS - Working: source face index [0], target face index [0]
[ReActor] 16:05:31 - STATUS - Using Hashed Source Face(s) Model...
[ReActor] 16:05:31 - STATUS - Using Hashed Target Face(s) Model...
!!! Exception during processing !!! cannot access local variable 'model_path' where it is not associated with a value
Traceback (most recent call last):
File "J:\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "J:\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI\custom_nodes\comfyui-reactor-node\nodes.py", line 353, in execute
script.process(
File "J:\ComfyUI\custom_nodes\comfyui-reactor-node\scripts\reactor_faceswap.py", line 109, in process
result = swap_face(
^^^^^^^^^^
File "J:\ComfyUI\custom_nodes\comfyui-reactor-node\scripts\reactor_swapper.py", line 336, in swap_face
face_swapper = getFaceSwapModel(model_path)
^^^^^^^^^^
UnboundLocalError: cannot access local variable 'model_path' where it is not associated with a value
Prompt executed in 0.09 seconds
```
### Other
_No response_
|
closed
|
2025-01-04T14:09:39Z
|
2025-02-07T00:04:17Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6344
|
[
"Potential Bug",
"Custom Nodes Bug"
] |
dule1970
| 4
|
polyaxon/traceml
|
data-visualization
| 5
|
Add LICENSE to MANIFEST.in
|
Could you please add the license to MANIFEST.in so that it will be included in sdists and other packages? This came up during [packaging](https://github.com/conda-forge/staged-recipes/pull/1355/) of pandas-summary in [conda-forge](https://conda-forge.github.io).
|
closed
|
2016-08-25T18:49:52Z
|
2016-08-26T10:51:19Z
|
https://github.com/polyaxon/traceml/issues/5
|
[] |
proinsias
| 1
|
qwj/python-proxy
|
asyncio
| 114
|
Headers
|
Is it possible to get the headers of the request?
|
open
|
2021-02-23T08:34:19Z
|
2021-02-24T17:29:44Z
|
https://github.com/qwj/python-proxy/issues/114
|
[] |
4you2see
| 2
|
koxudaxi/fastapi-code-generator
|
pydantic
| 277
|
Special input parameter support
|
An error is reported when a parameter name is a Python keyword. Like this:
```text
"parameters": [
{
"name": "from",
"in": "query",
"description": "起点坐标,39.071510,117.190091",
"required": true,
"schema": {
"type": "string"
}
},
...
```
Maybe you should generate something like this:
```
from_: str=Field(alias="from"),
```
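Generators typically handle this by checking parameter names against Python's reserved words and aliasing them. A sketch of the expected behavior, using the stdlib `keyword` module:

```python
import keyword


def safe_param_name(name: str) -> str:
    """Append an underscore to names that collide with Python keywords,
    so the original name can still be restored via an alias
    (e.g. Field(alias="from"))."""
    return name + "_" if keyword.iskeyword(name) else name
```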
|
open
|
2022-09-11T04:04:23Z
|
2022-09-11T04:04:23Z
|
https://github.com/koxudaxi/fastapi-code-generator/issues/277
|
[] |
Chise1
| 0
|
huggingface/pytorch-image-models
|
pytorch
| 1,231
|
[BUG] Convnext inference key error
|
Hi,
I used the train.py script to train ConvNeXt-B and it worked well, but when I try to use the checkpoint .pth for inference, an error occurs. It says RuntimeError: Error(s) in loading state_dict for DPN:
Missing key(s) in state_dict: "features.conv1_1.conv.weight" ....
I don't know why it occurs.
|
closed
|
2022-04-24T01:55:28Z
|
2022-04-24T02:01:36Z
|
https://github.com/huggingface/pytorch-image-models/issues/1231
|
[
"bug"
] |
523997931
| 0
|
Yorko/mlcourse.ai
|
seaborn
| 382
|
Topic 3. Decision tree regressor, MSE
|
In the DecisionTreeRegressor example, the MSE in the plot title is computed incorrectly:
`plt.title("Decision tree regressor, MSE = %.2f" % np.sum((y_test - reg_tree_pred) ** 2))`
It should also be divided by the number of observations; I suggest fixing it like this:
`plt.title("Decision tree regressor, MSE = %.4f" % (np.sum((y_test - reg_tree_pred) ** 2) / n_test))`
File:
https://github.com/Yorko/mlcourse.ai/blob/master/jupyter_english/topic03_decision_trees_kNN/topic3_decision_trees_kNN.ipynb
The same applies to the Russian version:
https://github.com/Yorko/mlcourse.ai/blob/master/jupyter_russian/topic03_decision_trees_knn/topic3_trees_knn.ipynb
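Equivalently, `np.mean` divides by the number of observations directly; a quick check that the two formulas agree:

```python
import numpy as np

y_test = np.array([3.0, -0.5, 2.0, 7.0])
pred = np.array([2.5, 0.0, 2.0, 8.0])

mse_sum_over_n = np.sum((y_test - pred) ** 2) / y_test.size
mse_mean = np.mean((y_test - pred) ** 2)
```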
|
closed
|
2018-10-17T08:08:32Z
|
2018-10-17T21:34:04Z
|
https://github.com/Yorko/mlcourse.ai/issues/382
|
[
"minor_fix"
] |
lalimpiev
| 1
|
Nekmo/amazon-dash
|
dash
| 156
|
confirmation with URL
|
Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with amazon-dash)
- [x] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
#### Description
With Telegram and Pushbullet already supported, it could be useful to have a generic confirmation via HTTP (POST or another method) to send a message to a service (push, IFTTT, ...), like this:
```yaml
confirmations:
  send-url:
    url: 'http://domain.com/path/to/webhook'
    method: post
    content-type: json
    body: '{"key": "yhythhbhbgbF1", "msg": "button pushed"}'
```
|
open
|
2020-05-11T15:46:18Z
|
2020-05-11T15:46:18Z
|
https://github.com/Nekmo/amazon-dash/issues/156
|
[] |
Fraborak
| 0
|
modelscope/data-juicer
|
data-visualization
| 128
|
Where can I get the en_core_web_md-3.5.0.zip model?
|
### Before Asking 在提问之前
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully. 我已经仔细阅读了 [README](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) 上的操作指引。
- [X] I have pulled the latest code of main branch to run again and the problem still existed. 我已经拉取了主分支上最新的代码,重新运行之后,问题仍不能解决。
### Search before asking 先搜索,再提问
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar questions. 我已经在 [issue列表](https://github.com/alibaba/data-juicer/issues) 中搜索但是没有发现类似的问题。
### Question
<img width="1194" alt="image" src="https://github.com/alibaba/data-juicer/assets/93858590/bce31b8e-b840-4f2b-ac49-e9bd4b0c69f1">
### Additional 额外信息
_No response_
|
closed
|
2023-12-11T05:33:21Z
|
2024-01-06T09:31:58Z
|
https://github.com/modelscope/data-juicer/issues/128
|
[
"question",
"stale-issue"
] |
ZengJin123
| 4
|
flairNLP/fundus
|
web-scraping
| 256
|
Rename `HTMLSource`'s `url_filter` parameter to `url_filters` and type hint accordingly
|
Type-hinting `self.url_filter` would help since it differs from the past `url_filter`. Also, we could allow `Optional[list[URLFilter]]` in the first place. Then `url_filters` instead of `url_filter` would be more fitting. This is not really related to async, so maybe for another PR?
_Originally posted by @dobbersc in https://github.com/flairNLP/fundus/pull/239#discussion_r1248043886_
|
closed
|
2023-07-03T12:45:23Z
|
2023-08-29T15:09:30Z
|
https://github.com/flairNLP/fundus/issues/256
|
[] |
MaxDall
| 0
|
mwaskom/seaborn
|
pandas
| 3,299
|
Swarmplots fail with numpy v1.22 for dataframes with repeating indices
|
With numpy v1.22, swarmplots fail when the index of the dataframe has repeating elements (works for 1.23+):
(I'm using seaborn 0.12.1 and matplotlib 3.7.1)
```
File "/[]/seaborn/categorical.py", line 346, in plot_swarms
dodge_move = offsets[sub_data["hue"].map(self._hue_map.levels.index)]
File "[]/pandas/core/series.py", line 981, in __getitem__
return self._get_value(key)
File "[]/pandas/core/series.py", line 1089, in _get_value
loc = self.index.get_loc(label)
File "/[]/pandas/core/indexes/base.py", line 3804, in get_loc
raise KeyError(key) from err
KeyError: 1
```
(I'm guessing it's not best practice to have repeating values in the index, but it's probably not just me.)
Here's a minimal example that gives the error:
```python
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
df = sns.load_dataset('exercise')
df.index = [1, *range(0,89)]
sns.swarmplot(data = df, x = 'diet', y = 'pulse', hue = 'kind',
dodge = True)
plt.show()
```
I think this is an issue with how numpy is interpreting the Pandas series as an index in [this line](https://github.com/mwaskom/seaborn/blob/master/seaborn/categorical.py#L453); I tried casting `sub_data["hue"]...` to a list in this line and it seemed to solve the problem.
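A sketch of that workaround with stand-in values (not the actual seaborn internals): converting the Series to a plain ndarray before fancy-indexing avoids any lookup through the Series' own index, regardless of numpy version.

```python
import numpy as np
import pandas as pd

# Per-hue-level dodge offsets, as in categorical.py
offsets = np.array([-0.2, 0.0, 0.2])

# A mapped hue Series whose *index* has repeats, like sub_data["hue"]
codes = pd.Series([0, 1, 2, 1], index=[1, 0, 1, 2])

# Indexing with a plain array never consults the Series index
dodge_move = offsets[np.asarray(codes)]
print(dodge_move.tolist())  # → [-0.2, 0.0, 0.2, 0.0]
```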
|
closed
|
2023-03-21T01:28:46Z
|
2023-03-21T12:55:26Z
|
https://github.com/mwaskom/seaborn/issues/3299
|
[] |
uri-t
| 1
|
fugue-project/fugue
|
pandas
| 292
|
[FEATURE] Implement a real DuckDB engine
|
Currently, the DuckDB engine is just a NativeExecutionEngine + DuckDB SQL engine. It's not optimal because the data will be transferred between DuckDB and pandas whenever there is a new SQL statement. So we need a real DuckDBExecutionEngine, in which the dataframe can be kept inside DuckDB for most of the steps. This should improve speed a lot for common operations.
|
closed
|
2022-01-17T04:02:22Z
|
2022-01-17T07:49:52Z
|
https://github.com/fugue-project/fugue/issues/292
|
[
"enhancement",
"duckdb"
] |
goodwanghan
| 0
|
babysor/MockingBird
|
deep-learning
| 684
|
Are any other steps needed after adjusting batch_size?
|
I adjusted the batch_size, but the GPU utilization did not change.
|
closed
|
2022-07-27T19:49:41Z
|
2023-07-01T02:46:28Z
|
https://github.com/babysor/MockingBird/issues/684
|
[] |
yunqi777
| 1
|
autokey/autokey
|
automation
| 913
|
Missing wiki page - Contributed Scripts 3
|
### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Documentation
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [ ] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [ ] bug
- [ ] critical
- [ ] development
- [X] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
_No response_
### Which Linux distribution did you use?
_No response_
### Which AutoKey GUI did you use?
None
### Which AutoKey version did you use?
_No response_
### How did you install AutoKey?
_No response_
### Can you briefly describe the issue?
I was going to add a clipboard script to the [Contributed Scripts 3](https://github.com/autokey/autokey/wiki/Contributed-Scripts-3) page in the wiki, but the page is missing. You can see that there are supposed to be four scripts in it if you look on the [Contents of contributed script files](https://github.com/autokey/autokey/wiki/Contents-of-contributed-script-files) page.
A look at the git log of the wiki repository shows that the last time that file was touched was in **commit 63425ce06484d44ba9937c287d6bab6240fcf986** on July 10, 2023 when @kreezxil updated the page.
I'm not a savvy enough git user to be able to figure out how to see the deletion, let alone how to get it back. Any help would be appreciated.
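A sketch of one way to find the deleting commit and restore the file with git, driven from Python (untested against this particular wiki; assumes a local clone of the wiki repository and that the page file is named `Contributed-Scripts-3.md`):

```python
import subprocess

def restore_deleted(path, repo="."):
    """Find the last commit that deleted `path` in `repo` and restore the
    file from that commit's parent. Returns the deleting commit's hash,
    or '' if no deletion was found."""
    sha = subprocess.check_output(
        ["git", "log", "--diff-filter=D", "--format=%H", "-1", "--", path],
        cwd=repo, text=True,
    ).strip()
    if sha:
        # The parent commit (sha^) still contains the file.
        subprocess.check_call(
            ["git", "checkout", f"{sha}^", "--", path], cwd=repo
        )
    return sha
```

Usage would be `restore_deleted("Contributed-Scripts-3.md", repo="/path/to/wiki-clone")`, followed by a normal commit and push.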
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
1. Try to visit the [Contributed Scripts 3](https://github.com/autokey/autokey/wiki/Contributed-Scripts-3) page in the wiki in your browser.
### What should have happened?
The page should open.
### What actually happened?
_No response_
### Do you have screenshots?
The page doesn't exist.
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_
|
closed
|
2023-08-15T19:05:41Z
|
2023-08-17T18:50:48Z
|
https://github.com/autokey/autokey/issues/913
|
[] |
Elliria
| 6
|
xuebinqin/U-2-Net
|
computer-vision
| 311
|
GPU memory increases during training
|
I am using the [refactored U-2-Net](https://github.com/xuebinqin/U-2-Net/blob/master/model/u2net_refactor.py) and the GPU memory increases every few iterations during training. I noticed in the code that there are function definitions in the forward functions such as [here](https://github.com/xuebinqin/U-2-Net/blob/ebb340e24c5645cd75af6c255c8ce3b5eefe074f/model/u2net_refactor.py#L48) and [here](https://github.com/xuebinqin/U-2-Net/blob/ebb340e24c5645cd75af6c255c8ce3b5eefe074f/model/u2net_refactor.py#L90). Would they be the possible causes for the increased memory?
EDIT: Apparently the setting `torch.backends.cudnn.benchmark = True` is causing the memory to increase during training. Turning it off gives more memory at the first few iterations but the memory still increases as time goes by.
|
open
|
2022-06-15T07:02:26Z
|
2022-06-20T05:36:19Z
|
https://github.com/xuebinqin/U-2-Net/issues/311
|
[] |
kenmbkr
| 1
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,288
|
driver.find_element(By.XPATH, '//*[contains( text(), "Sign In")]').click() triggers antibot but if clicking manually it works
|
Apologies if I missed something here.
When trying to scrape a site, if I use the `driver.find_element(By.XPATH, '//*[contains( text(), "Sign In")]').click()` command, I get asked to prove I am human, but if I click the element manually I can browse through the site without any anti-bot verification.
Can't share the site as it's work related, but I wondered if it was anything obvious that someone may have run into before?
Is it due to `ec` being a Selenium function?
|
open
|
2023-05-24T06:56:00Z
|
2023-08-12T00:31:42Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1288
|
[] |
MGSolu
| 2
|
AUTOMATIC1111/stable-diffusion-webui
|
pytorch
| 15,348
|
[Bug]: SDXL Lowram flag is broken (Probably)
|
### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
SDXL in combination with `--lowram` flag and lora causes abnormal RAM usage.
### Steps to reproduce the problem
1. Add the `--lowram` in the startup script
2. Run WebUI and load SDXL checkpoint
3. Add any SDXL lora to the prompt
4. Observe abnormal filling of the RAM
### What should have happened?
Memory management should work better, WebUI should leave RAM as it is and work only with VRAM if the `--lowram` flag is enabled
### What browsers do you use to access the UI ?
Mozilla Firefox, Google Chrome, Android
### Sysinfo
[sysinfo-2024-03-21-19-51.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/14706297/sysinfo-2024-03-21-19-51.json)
### Console logs
```Shell
Console logs are irrelevant in this situation because they don't show anything outside of normal behavior.
```
### Additional information
Hello.
Recently I tested the performance of the webui with extreme cases of memory shortage. The test that caught my attention was done with the minimum amount of RAM (8GB). More specifically, SDXL in combination with the "--lowram" flag and loaded LoRA causes abnormal filling of RAM.
Normally, after the checkpoint is loaded and moved from RAM to VRAM, the amount of RAM doesn't change during the generation process.
(although this is strange behavior considering the enabled fast safetensors flag, which should force the webui to load checkpoint directly into VRAM)
This abnormal RAM behavior started to become noticeable somewhere between 57727e554d8f87b4cf438390d6cb05a27d1734f5 and c1713bfeac461bc28158b66ef8d956a39e296b94.
At this time I cannot provide you with a more precise range of suspected commits, I will try to find them later. However, I should note that this problem is also occurring in the lllyasviel/stable-diffusion-webui-forge fork. I hope you'll find this information useful in tracking down the exact change.
Screenshots:
Version 57727e554d8f87b4cf438390d6cb05a27d1734f5 after checkpoint is loaded

Version 57727e554d8f87b4cf438390d6cb05a27d1734f5 after generating an image with lora

Version 57727e554d8f87b4cf438390d6cb05a27d1734f5 after restart and generating an image without lora

Version c1713bfeac461bc28158b66ef8d956a39e296b94 after generation two images with lora

|
open
|
2024-03-21T20:10:10Z
|
2024-03-22T19:40:27Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15348
|
[
"bug-report"
] |
Les-Tin
| 1
|
pytest-dev/pytest-qt
|
pytest
| 158
|
waitForWindowShown doesn't handle timeout
|
Qt's [`qWaitForWindowShown`](https://doc.qt.io/qt-5/qtest-obsolete.html#qWaitForWindowShown) (or [`qWaitForWindowExposed`](https://doc.qt.io/qt-5/qtest.html#qWaitForWindowExposed) which supersedes it since Qt 5.0) has a `timeout` argument, and returns a `bool` saying if the waiting has timed out or not.
pytest-qt's wrapper over it doesn't have the `timeout` argument, discards the return value and doesn't raise an exception either in case there was a timeout.
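A sketch of the requested behavior (names are hypothetical; the real fix would forward to Qt's `qWaitForWindowExposed`): pass the timeout through and raise instead of silently discarding the boolean result.

```python
def wait_window_exposed(widget, timeout=5000, _wait=None):
    """Wrap a qWaitForWindowExposed-style callable: forward `timeout`
    and raise instead of discarding the bool result. `_wait` stands in
    for the real Qt binding here."""
    if not _wait(widget, timeout):
        raise TimeoutError(f"Window was not exposed within {timeout} ms")

# With a fake waiter that always times out:
try:
    wait_window_exposed(object(), timeout=100, _wait=lambda w, t: False)
except TimeoutError as e:
    print(e)  # → Window was not exposed within 100 ms
```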
|
closed
|
2016-09-15T12:17:51Z
|
2016-10-19T00:07:59Z
|
https://github.com/pytest-dev/pytest-qt/issues/158
|
[] |
The-Compiler
| 2
|
python-gino/gino
|
sqlalchemy
| 387
|
Binding postgres url with sslcert and sslmode does not work
|
* GINO version: 0.800
* Python version: 3.6.5
* asyncpg version: 0.18.1
* aiocontextvars version: 2.3
* PostgreSQL version: 10.5
### Description
`async def main():
await db.set_bind(f'postgres://admin:pwd@amazingco.aivencloud.com:18629/prod?sslmode=verify&sslcert={ctx}')`
Keeps throwing this error.
`asyncpg.exceptions.InvalidAuthorizationSpecificationError: no pg_hba.conf entry for host " ", user "admin", database "prod", SSL off`
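One direction worth trying (an assumption, not a confirmed fix — whether gino's `set_bind` forwards keyword arguments to asyncpg is not verified here): asyncpg-based drivers generally take an `ssl.SSLContext` rather than `sslmode`/`sslcert` URL query parameters. Building the context looks like:

```python
import ssl

# Verify the server certificate instead of putting sslmode=verify in the URL.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED
# then (assumption): await db.set_bind('postgres://admin:pwd@host:18629/prod', ssl=ctx)
```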
|
closed
|
2018-11-09T12:51:54Z
|
2018-11-17T10:50:52Z
|
https://github.com/python-gino/gino/issues/387
|
[
"question"
] |
willyhakim
| 4
|
falconry/falcon
|
api
| 1,721
|
Remove browser warning in unset_cookie
|
Currently unset cookie returns this warning on firefox when used in http:
`Cookie “<name-here>” will be soon rejected because it has the “sameSite” attribute set to “none” or an invalid value, without the “secure” attribute. To know more about the “sameSite“ attribute, read https://developer.mozilla.org/docs/Web/HTTP/Headers/Set-Cookie/SameSite`
This particular cookie was set with `secure=False, http_only=True, same_site="Strict"`, but I don't think how it was previously set matters.
From the [MDN documentation](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite#Fixing_common_warnings) there seem to be two options to remove this warning:
- Set`same_site` to `Strict` or `Lax`
- Set `secure` to `True`
Setting `same_site` seems preferable, since `secure` can be set only over HTTPS for some cookie prefixes: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#Cookie_prefixes
Since the cookie is removed after the response, setting `same_site` to `Lax` should probably be ok.
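A sketch of the header the second option would produce when unsetting a cookie — building the expiring `Set-Cookie` value by hand (illustrative only, not falcon's actual implementation):

```python
from datetime import datetime, timezone

def expiring_set_cookie(name):
    """Header value that removes a cookie while keeping SameSite valid
    for browsers that reject SameSite=None without Secure."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    expires = epoch.strftime("%a, %d %b %Y %H:%M:%S GMT")
    return f"{name}=; expires={expires}; Max-Age=0; SameSite=Lax"

print(expiring_set_cookie("session"))
```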
|
closed
|
2020-05-12T16:26:27Z
|
2020-06-10T15:20:12Z
|
https://github.com/falconry/falcon/issues/1721
|
[
"bug"
] |
CaselIT
| 5
|
aeon-toolkit/aeon
|
scikit-learn
| 1,919
|
[DOC] Outdated Classification Notebook
|
### Describe the issue linked to the documentation
I was reading the classification notebook to learn more about classifiers; there were some confusing parts, so I reached out on the Slack for some help. It seems, though, that the classification notebook guides are completely outdated.
Notebook in discussion: https://www.aeon-toolkit.org/en/stable/examples/classification/classification.html
### Suggest a potential alternative/fix
I was told that: "Old BaseTransformers are soon to be removed. We now have BaseCollectionTransformers, which assume shape (n_cases, n_channels, n_timepoints) and BaseSeriesTransformers that default to (n_channels, n_timepoints) but can be set with an axis parameter to take (n_timepoints,n_channels)."
It is worth going through the notebook and re-write some sections of it (not fully rewriting the whole notebook.)
|
closed
|
2024-08-07T11:05:26Z
|
2024-11-28T11:17:24Z
|
https://github.com/aeon-toolkit/aeon/issues/1919
|
[
"documentation"
] |
Moonzyyy
| 1
|
computationalmodelling/nbval
|
pytest
| 21
|
Smarter picking of kernel to start
|
Currently, nbval always starts the default Python kernel, though this is not necessarily running in the same Python environment as nbval itself (see discussion on #6).
1. In the default case, we should probably pick the kernel to start based on notebook metadata, like nbconvert does for `--execute`. This would allow validating notebooks in other languages.
2. We may also want an option which forces it to start a Python kernel in the same Python environment as nbval itself, so we know that test dependencies affect what's available in the kernel. Thoughts on what this should be called?
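Option 1 amounts to reading the kernelspec name out of the notebook file's metadata, roughly like this (path and fallback name are illustrative):

```python
import json

def notebook_kernel_name(path, default="python3"):
    """Return the kernel name recorded in a notebook's metadata,
    falling back to a default when none is recorded."""
    with open(path) as f:
        nb = json.load(f)
    return nb.get("metadata", {}).get("kernelspec", {}).get("name", default)
```

This is the same metadata nbconvert's `--execute` consults, so a notebook saved with, say, an R or Julia kernelspec would get validated with that kernel.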
|
closed
|
2016-12-21T22:48:07Z
|
2017-01-17T10:13:14Z
|
https://github.com/computationalmodelling/nbval/issues/21
|
[
"enhancement"
] |
takluyver
| 1
|
proplot-dev/proplot
|
matplotlib
| 21
|
Non-mercator ticks available in cartopy
|
FYI @lukelbd , looks like tick labeling is available for non-mercator projections now. https://github.com/SciTools/cartopy/pull/1117 was just merged. It's not in the conda/pip release yet, though.
|
closed
|
2019-06-19T17:39:34Z
|
2020-05-11T10:18:11Z
|
https://github.com/proplot-dev/proplot/issues/21
|
[
"feature",
"high priority"
] |
bradyrx
| 2
|
mwaskom/seaborn
|
data-visualization
| 3,003
|
Documentation archive dropup only works on homepage
|
Needs to be changed to use absolute paths.
|
closed
|
2022-09-06T22:25:00Z
|
2022-09-09T01:50:34Z
|
https://github.com/mwaskom/seaborn/issues/3003
|
[
"docs"
] |
mwaskom
| 1
|
matterport/Mask_RCNN
|
tensorflow
| 2,816
|
Unable to open file (truncated file: eof = 28999680, sblock->base_addr = 0, stored_eof = 257557808)
|
open
|
2022-04-23T09:42:02Z
|
2022-04-23T09:42:02Z
|
https://github.com/matterport/Mask_RCNN/issues/2816
|
[] |
remembersu
| 0
|
|
jupyter/nbgrader
|
jupyter
| 1,033
|
Manual grading after interrupted kernel
|
### Operating system
Ubuntu 18.04 LTS
### `nbgrader --version`
Python version 3.6.6 | packaged by conda-forge | (default, Jul 26 2018, 09:53:17)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)]
nbgrader version 0.5.4
### `jupyterhub --version` (if used with JupyterHub)
0.9.4
### `jupyter notebook --version`
5.7.0
### Expected behavior
After executing `nbgrader autograde`, and exceeding the `ExecutePreprocessor.timeout` value (default:30), the kernel is interrupted.
Now, a manual grading should be possible.
### Actual behavior
The Formgrader extension displays `needs autograding`.
```
$ cd "/home/nbgadmin"
$ nbgrader autograde "Time3"
[AutogradeApp | INFO] Copying /home/nbgadmin/submitted/nbgadmin/Time3/timestamp.txt -> /home/nbgadmin/autograded/nbgadmin/Time3/timestamp.txt
[AutogradeApp | INFO] SubmittedAssignment<Time3 for nbgadmin> submitted at 2018-10-20 21:33:17.469330
[AutogradeApp | INFO] Overwriting files with master versions from the source directory
[AutogradeApp | INFO] Sanitizing /home/nbgadmin/submitted/nbgadmin/Time3/TestTime.ipynb
[AutogradeApp | INFO] Converting notebook /home/nbgadmin/submitted/nbgadmin/Time3/TestTime.ipynb
[AutogradeApp | INFO] Writing 2994 bytes to /home/nbgadmin/autograded/nbgadmin/Time3/TestTime.ipynb
[AutogradeApp | INFO] Autograding /home/nbgadmin/autograded/nbgadmin/Time3/TestTime.ipynb
[AutogradeApp | INFO] Converting notebook /home/nbgadmin/autograded/nbgadmin/Time3/TestTime.ipynb
[AutogradeApp | INFO] Executing notebook with kernel: java
Oct 20, 2018 9:52:07 PM io.github.spencerpark.jupyter.channels.Loop start
INFO: Loop starting...
Oct 20, 2018 9:52:07 PM io.github.spencerpark.jupyter.channels.Loop start
INFO: Loop started.
Oct 20, 2018 9:52:07 PM io.github.spencerpark.jupyter.channels.Loop start
INFO: Loop starting...
Oct 20, 2018 9:52:07 PM io.github.spencerpark.jupyter.channels.Loop start
INFO: Loop started.
Oct 20, 2018 9:52:07 PM io.github.spencerpark.jupyter.channels.Loop start
INFO: Loop starting...
Oct 20, 2018 9:52:07 PM io.github.spencerpark.jupyter.channels.Loop start
INFO: Loop started.
[AutogradeApp | ERROR] Timeout waiting for execute reply (30s).
[AutogradeApp | ERROR] Interrupting kernel
Oct 20, 2018 9:52:39 PM io.github.spencerpark.jupyter.channels.ShellChannel lambda$bind$16
SEVERE: Unhandled message: none
[AutogradeApp | WARNING] Timeout waiting for IOPub output
Oct 20, 2018 9:52:44 PM io.github.spencerpark.jupyter.channels.Loop shutdown
INFO: Loop shutdown.
Oct 20, 2018 9:52:44 PM io.github.spencerpark.jupyter.channels.Loop shutdown
INFO: Loop shutdown.
Oct 20, 2018 9:52:44 PM io.github.spencerpark.jupyter.channels.Loop shutdown
INFO: Loop shutdown.
Oct 20, 2018 9:52:44 PM io.github.spencerpark.jupyter.channels.Loop run
INFO: Running loop shutdown callback.
Oct 20, 2018 9:52:44 PM io.github.spencerpark.jupyter.channels.Loop run
INFO: Loop stopped.
Oct 20, 2018 9:52:44 PM io.github.spencerpark.jupyter.channels.Loop run
INFO: Running loop shutdown callback.
Oct 20, 2018 9:52:44 PM io.github.spencerpark.jupyter.channels.Loop run
INFO: Loop stopped.
[AutogradeApp | ERROR] While processing assignment /home/nbgadmin/submitted/nbgadmin/Time3, the kernel became unresponsive and we could not interrupt it. This probably means that the students' code has an infinite loop that consumes a lot of memory or something similar. nbgrader doesn't know how to deal with this problem, so you will have to manually edit the students' code (for example, to just throw an error rather than enter an infinite loop).
[AutogradeApp | WARNING] Removing failed assignment: /home/nbgadmin/autograded/nbgadmin/Time3
[AutogradeApp | ERROR] There was an error processing assignment 'Time3' for student 'nbgadmin'
[AutogradeApp | ERROR] Please see the error log (.nbgrader.log) for details on the specific errors on the above failures.
```
### Steps to reproduce the behavior
Use https://github.com/SpencerPark/IJava
Execute
```java
boolean doSomething(int timeoutSeconds) {
long time = System.currentTimeMillis();
long end = time + (timeoutSeconds * 1000);
while(System.currentTimeMillis() < end) {
time = System.currentTimeMillis();
}
return true;
}
doSomething(30+10)
```
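As a workaround while this is open, the 30-second default mentioned above can be raised in `nbgrader_config.py` (a config sketch; the value 120 is just an example):

```python
# nbgrader_config.py (sketch)
c = get_config()  # noqa: F821 - provided by traitlets at config load time
c.ExecutePreprocessor.timeout = 120  # seconds; default is 30
```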
|
open
|
2018-10-20T22:07:35Z
|
2022-06-23T10:21:12Z
|
https://github.com/jupyter/nbgrader/issues/1033
|
[
"enhancement"
] |
adibaba
| 3
|
ibis-project/ibis
|
pandas
| 10,022
|
bug: BigQuery Create table with partition_by argument is not working
|
### What happened?
I am trying to use the BigQuery backend to create a table with a partition-by expression; ibis raises an exception.
```
import ibis
c = ibis.bigquery.connect(...)
schema = ibis.schema([ ("c1", "date"), ("c2", "int")])
c.create_table('test_tb', partition_by='DATE_TRUNC(c1, MONTH)') # -> this line is throwing exception
```
### What version of ibis are you using?
9.4.0
### What backend(s) are you using, if any?
BigQuery
### Relevant log output
```sh
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/ruiyang/Go/src/github.com/ascend-io/ascend-core/.venv/lib/python3.11/site-packages/ibis/backends/bigquery/__init__.py", line 1007, in create_table
raise com.IbisError("One of the `schema` or `obj` parameter is required")
ibis.common.exceptions.IbisError: One of the `schema` or `obj` parameter is required
>>> c.create_table('test_tb', partition_by='DATE_TRUNC(c1, MONTH)', schema=schema)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/ruiyang/Go/src/github.com/ascend-io/ascend-core/.venv/lib/python3.11/site-packages/ibis/backends/bigquery/__init__.py", line 1099, in create_table
self.raw_sql(sql)
File "/Users/ruiyang/Go/src/github.com/ascend-io/ascend-core/.venv/lib/python3.11/site-packages/ibis/backends/bigquery/__init__.py", line 710, in raw_sql
return self.client.query_and_wait(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ruiyang/Go/src/github.com/ascend-io/ascend-core/.venv/lib/python3.11/site-packages/google/cloud/bigquery/client.py", line 3601, in query_and_wait
return _job_helpers.query_and_wait(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ruiyang/Go/src/github.com/ascend-io/ascend-core/.venv/lib/python3.11/site-packages/google/cloud/bigquery/_job_helpers.py", line 414, in query_and_wait
return _wait_or_cancel(
^^^^^^^^^^^^^^^^
File "/Users/ruiyang/Go/src/github.com/ascend-io/ascend-core/.venv/lib/python3.11/site-packages/google/cloud/bigquery/_job_helpers.py", line 562, in _wait_or_cancel
return job.result(
^^^^^^^^^^^
File "/Users/ruiyang/Go/src/github.com/ascend-io/ascend-core/.venv/lib/python3.11/site-packages/google/cloud/bigquery/job/query.py", line 1676, in result
while not is_job_done():
^^^^^^^^^^^^^
File "/Users/ruiyang/Go/src/github.com/ascend-io/ascend-core/.venv/lib/python3.11/site-packages/google/api_core/retry/retry_unary.py", line 293, in retry_wrapped_func
return retry_target(
^^^^^^^^^^^^^
File "/Users/ruiyang/Go/src/github.com/ascend-io/ascend-core/.venv/lib/python3.11/site-packages/google/api_core/retry/retry_unary.py", line 153, in retry_target
_retry_error_helper(
File "/Users/ruiyang/Go/src/github.com/ascend-io/ascend-core/.venv/lib/python3.11/site-packages/google/api_core/retry/retry_base.py", line 212, in _retry_error_helper
raise final_exc from source_exc
File "/Users/ruiyang/Go/src/github.com/ascend-io/ascend-core/.venv/lib/python3.11/site-packages/google/api_core/retry/retry_unary.py", line 144, in retry_target
result = target()
^^^^^^^^
File "/Users/ruiyang/Go/src/github.com/ascend-io/ascend-core/.venv/lib/python3.11/site-packages/google/cloud/bigquery/job/query.py", line 1625, in is_job_done
raise job_failed_exception
google.api_core.exceptions.BadRequest: 400 Unrecognized name: D at [1:111]; reason: invalidQuery, location: query, message: Unrecognized name: D at [1:111]
```
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
open
|
2024-09-04T22:59:38Z
|
2024-10-23T15:26:05Z
|
https://github.com/ibis-project/ibis/issues/10022
|
[
"bug",
"bigquery"
] |
ruiyang2015
| 7
|
jonaswinkler/paperless-ng
|
django
| 412
|
Use existing folder structure
|
I am interested in using paperless-ng; however, I already have an existing folder structure in place where I organize everything.
In addition to documents, I keep other correlated data there, so it is not easily possible to get rid of the existing structure.
I am also hesitant to dump everything into paperless-ng because getting all my documents out of it seems messy.
Would it be possible to specify a (read-only) folder that just gets indexed, e.g. every day or so?
|
closed
|
2021-01-23T07:40:39Z
|
2021-01-24T06:07:29Z
|
https://github.com/jonaswinkler/paperless-ng/issues/412
|
[] |
spacemanspiff2007
| 4
|
plotly/dash
|
plotly
| 2,641
|
[BUG] Reset View Button in Choropleth Mapbox Shows Old Data
|
Hello,
I've built a dash app (https://www.greenspace.city), which uses the Plotly Choropleth Mapbox to visualise spatial data. The map is centered on the selected suburb (dropdown value).
Unfortunately, when I switch the data (i.e. select a different suburb or city from the dropdown) and hit the reset view button (on the Choropleth Mapbox), it takes the user back to the original map center.
How do I stop the Choropleth Mapbox from caching the old data?
```
dash 2.12.1
plotly 5.16.1
```
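One thing that has helped in similar cases (a sketch, not a confirmed fix for this app): return a fresh layout whose `uirevision` changes with the selection, so Plotly drops the cached view state and the reset button targets the new center. Shown here as a plain layout dict with hypothetical coordinates:

```python
def mapbox_layout(lat, lon, suburb):
    """Layout fragment for a choropleth mapbox centred on `suburb`.
    A new `uirevision` value tells Plotly to discard cached view state."""
    return {
        "mapbox": {"center": {"lat": lat, "lon": lon}, "zoom": 11},
        "uirevision": suburb,  # changes per selection → view resets here
    }

layout = mapbox_layout(-33.87, 151.21, "Newtown")
```

In a Dash callback, this layout would be applied to the returned figure each time the dropdown value changes.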
|
closed
|
2023-09-12T07:31:44Z
|
2024-07-25T13:42:09Z
|
https://github.com/plotly/dash/issues/2641
|
[] |
masands
| 1
|