Columns: repo_name (string) · topic (string, 30 classes) · issue_number (int64) · title (string) · body (string) · state (string, 2 classes) · created_at (string) · updated_at (string) · url (string) · labels (list) · user_login (string) · comments_count (int64)
gradio-app/gradio
python
9,942
Component refresh log flooding issue
### Describe the bug When I configure gr.Textbox in Gradio with every=1, the console keeps printing refresh logs, which I don't want. The logs get flooded and no other useful messages can be seen. I just want the component configured with every=1 to stop printing an INFO log on every refresh, but I couldn't find any solution; could you help me solve it? ![image](https://github.com/user-attachments/assets/b31d95f7-7ce7-4d6d-8f7a-543a6eff519b) ### Have you searched existing issues? 🔎 - [X] I have searched and found no existing issues ### Reproduction ```python import gradio as gr with gr.Row(): with gr.Column(): server_output = gr.Textbox( label="", lines=20, value=func, show_copy_button=True, elem_classes="log-container", every=1, # Refresh every 1 second autoscroll=True ) ``` ### Screenshot _No response_ ### Logs _No response_ ### System Info ```shell Yes, I have used the update command to update Gradio ``` ### Severity I can work around it
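Not part of the original report: a minimal workaround sketch, assuming the flooded INFO lines come from standard Python loggers (for example httpx or uvicorn access logs that fire on each poll). Raising those loggers' levels should quiet the console without touching Gradio itself; which logger actually floods depends on the Gradio version.

```python
import logging

# Hypothetical workaround: raise the log level of the chatty loggers.
# Inspect logging.root.manager.loggerDict to find the right names if these
# are not the ones producing the per-refresh INFO lines.
for noisy in ("httpx", "uvicorn.access"):
    logging.getLogger(noisy).setLevel(logging.WARNING)
```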
open
2024-11-12T07:30:38Z
2024-11-14T22:27:44Z
https://github.com/gradio-app/gradio/issues/9942
[ "bug" ]
diya-he
3
onnx/onnx
deep-learning
5,783
[Resize Spec] roi is not used to compute the output shape
# Bug Report ### Describe the bug In the spec of [resize](https://onnx.ai/onnx/operators/onnx__Resize.html#resize-19), each dimension value of the output tensor is: `output_dimension = floor(input_dimension * (roi_end - roi_start) * scale)` However, the actual [shape inference code](https://github.com/onnx/onnx/blob/5be7f3164ba0b2c323813264ceb0ae7e929d2350/onnx/defs/tensor/utils.cc#L336C18-L336C18) doesn't reference the ROI: `input_shape.dim(i).dim_value()) * scales_data[i])` Likewise, none of the EPs in ONNX Runtime appear to consume roi when computing the output shape.
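Not in the original report: a small illustration of the discrepancy, computing one output dimension both ways under an assumed roi of [0.0, 0.5] on that axis.

```python
import math

input_dim = 100
scale = 2.0
roi_start, roi_end = 0.0, 0.5  # assumed roi slice covering half of the axis

# Output size as written in the Resize spec (roi participates):
spec_dim = math.floor(input_dim * (roi_end - roi_start) * scale)  # 100 * 0.5 * 2.0 -> 100

# Output size as the shape-inference code actually computes it (roi ignored):
inferred_dim = math.floor(input_dim * scale)                      # 100 * 2.0 -> 200

print(spec_dim, inferred_dim)  # 100 200
```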
closed
2023-11-30T21:12:37Z
2025-01-29T06:43:46Z
https://github.com/onnx/onnx/issues/5783
[ "bug", "module: shape inference", "stale" ]
zhangxiang1993
1
aleju/imgaug
deep-learning
48
OSError: [Errno 2] No such file or directory
I run "sudo -H python -m pip install imgaug-0.2.4.tar.gz" command to user my pip in Cellar to install imgaug, and got the following error. And i tried "pip install imgaug-0.2.4.tar.gz", and it seems that it uses 2.7.10 version python in osx system. However, my opencv is installed in 2.7.11 python in Cellar folder. How can i solve this problem? ---- Processing ./imgaug-0.2.4.tar.gz Error [Errno 2] No such file or directory while executing command python setup.py egg_info Exception: Traceback (most recent call last): File "/usr/local/lib/python2.7/site-packages/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/usr/local/lib/python2.7/site-packages/pip/commands/install.py", line 335, in run wb.build(autobuilding=True) File "/usr/local/lib/python2.7/site-packages/pip/wheel.py", line 749, in build self.requirement_set.prepare_files(self.finder) File "/usr/local/lib/python2.7/site-packages/pip/req/req_set.py", line 380, in prepare_files ignore_dependencies=self.ignore_dependencies)) File "/usr/local/lib/python2.7/site-packages/pip/req/req_set.py", line 634, in _prepare_file abstract_dist.prep_for_dist() File "/usr/local/lib/python2.7/site-packages/pip/req/req_set.py", line 129, in prep_for_dist self.req_to_install.run_egg_info() File "/usr/local/lib/python2.7/site-packages/pip/req/req_install.py", line 439, in run_egg_info command_desc='python setup.py egg_info') File "/usr/local/lib/python2.7/site-packages/pip/utils/__init__.py", line 667, in call_subprocess cwd=cwd, env=env) File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 710, in __init__ errread, errwrite) File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1335, in _execute_child raise child_exception OSError: [Errno 2] No such file or directory
open
2017-07-24T04:01:03Z
2017-07-24T16:44:17Z
https://github.com/aleju/imgaug/issues/48
[]
universewill
1
iusztinpaul/energy-forecasting
streamlit
22
[TypeError] Error while running Feature-pipeline with Airflow
Hi guys, I'm getting the error while running the feature-pipeline on Airflow, I've followed the instructions as https://github.com/iusztinpaul/energy-forecasting#run and stuck here, the logs was look like this: [2023-10-26, 21:48:13 +07] {process_utils.py:182} INFO - Executing cmd: /tmp/venvyb53_u_a/bin/python /tmp/venvyb53_u_a/script.py /tmp/venvyb53_u_a/script.in /tmp/venvyb53_u_a/script.out /tmp/venvyb53_u_a/string_args.txt /tmp/venvyb53_u_a/termination.log [2023-10-26, 21:48:13 +07] {process_utils.py:186} INFO - Output: [2023-10-26, 21:48:18 +07] {process_utils.py:190} INFO - INFO:__main__:export_end_datetime = 2023-10-26 14:46:59 [2023-10-26, 21:48:18 +07] {process_utils.py:190} INFO - INFO:__main__:days_delay = 15 [2023-10-26, 21:48:18 +07] {process_utils.py:190} INFO - INFO:__main__:days_export = 30 [2023-10-26, 21:48:18 +07] {process_utils.py:190} INFO - INFO:__main__:url = https://drive.google.com/uc?export=download&id=1y48YeDymLurOTUO-GeFOUXVNc9MCApG5 [2023-10-26, 21:48:18 +07] {process_utils.py:190} INFO - INFO:__main__:feature_group_version = 1 [2023-10-26, 21:48:18 +07] {process_utils.py:190} INFO - INFO:feature_pipeline.pipeline:Extracting data from API. [2023-10-26, 21:48:18 +07] {process_utils.py:190} INFO - WARNING:feature_pipeline.etl.extract:We clapped 'export_end_reference_datetime' to 'datetime(2023, 6, 30) + datetime.timedelta(days=days_delay)' as the dataset will not be updated starting from July 2023. The dataset will expire during 2023. Check out the following link for more information: https://www.energidataservice.dk/tso-electricity/ConsumptionDE35Hour [2023-10-26, 21:48:18 +07] {process_utils.py:190} INFO - INFO:feature_pipeline.etl.extract:Data already downloaded at: /opt/***/dags/output/data/ConsumptionDE35Hour.csv [2023-10-26, 21:48:19 +07] {process_utils.py:190} INFO - INFO:feature_pipeline.pipeline:Successfully extracted data from API. [2023-10-26, 21:48:19 +07] {process_utils.py:190} INFO - INFO:feature_pipeline.pipeline:Transforming data. [2023-10-26, 21:48:19 +07] {process_utils.py:190} INFO - INFO:feature_pipeline.pipeline:Successfully transformed data. [2023-10-26, 21:48:19 +07] {process_utils.py:190} INFO - INFO:feature_pipeline.pipeline:Building validation expectation suite. [2023-10-26, 21:48:19 +07] {process_utils.py:190} INFO - INFO:feature_pipeline.pipeline:Successfully built validation expectation suite. [2023-10-26, 21:48:19 +07] {process_utils.py:190} INFO - INFO:feature_pipeline.pipeline:Validating data and loading it to the feature store. [2023-10-26, 21:48:19 +07] {process_utils.py:190} INFO - Connected. Call `.close()` to terminate connection gracefully. [2023-10-26, 21:48:21 +07] {process_utils.py:190} INFO - [2023-10-26, 21:48:21 +07] {process_utils.py:190} INFO - [2023-10-26, 21:48:21 +07] {process_utils.py:190} INFO - UserWarning: The installed hopsworks client version 3.2.0 may not be compatible with the connected Hopsworks backend version 3.4.1. [2023-10-26, 21:48:21 +07] {process_utils.py:190} INFO - To ensure compatibility please install the latest bug fix release matching the minor version of your backend (3.4) by running 'pip install hopsworks==3.4.*' [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - Logged in to project, explore it here https://c.app.hopsworks.ai:443/p/140438 [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - Connected. Call `.close()` to terminate connection gracefully. 
[2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - Traceback (most recent call last): [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - File "/tmp/venvyb53_u_a/script.py", line 90, in <module> [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - res = run_feature_pipeline(*arg_dict["args"], **arg_dict["kwargs"]) [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - File "/tmp/venvyb53_u_a/script.py", line 81, in run_feature_pipeline [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - return pipeline.run( [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - File "/tmp/venvyb53_u_a/lib/python3.9/site-packages/feature_pipeline/pipeline.py", line 61, in run [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - load.to_feature_store( [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - File "/tmp/venvyb53_u_a/lib/python3.9/site-packages/feature_pipeline/etl/load.py", line 24, in to_feature_store [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - feature_store = project.get_feature_store() [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - File "/tmp/venvyb53_u_a/lib/python3.9/site-packages/hopsworks/project.py", line 111, in get_feature_store [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - return connection( [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - File "/tmp/venvyb53_u_a/lib/python3.9/site-packages/hsfs/decorators.py", line 35, in if_connected [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - return fn(inst, *args, **kwargs) [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - File "/tmp/venvyb53_u_a/lib/python3.9/site-packages/hsfs/connection.py", line 178, in get_feature_store [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - return self._feature_store_api.get(util.rewrite_feature_store_name(name)) [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - File "/tmp/venvyb53_u_a/lib/python3.9/site-packages/hsfs/core/feature_store_api.py", line 35, in get [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - return FeatureStore.from_response_json( [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - File "/tmp/venvyb53_u_a/lib/python3.9/site-packages/hsfs/feature_store.py", line 109, in from_response_json [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - return cls(**json_decamelized) [2023-10-26, 21:48:23 +07] {process_utils.py:190} INFO - TypeError: __init__() missing 3 required positional arguments: 'hdfs_store_path', 'featurestore_description', and 'inode_id' [2023-10-26, 21:48:26 +07] {taskinstance.py:1937} ERROR - Task failed with exception Traceback (most recent call last): File "/home/airflow/.local/lib/python3.8/site-packages/airflow/decorators/base.py", line 221, in execute return_value = super().execute(context) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 395, in execute return super().execute(context=serializable_context) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 192, in execute return_value = self.execute_callable() File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 609, in execute_callable result = self._execute_python_callable_in_subprocess(python_path, tmp_path) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 463, in _execute_python_callable_in_subprocess raise AirflowException(error_msg) from None airflow.exceptions.AirflowException: Process returned non-zero exit status 1. 
__init__() missing 3 required positional arguments: 'hdfs_store_path', 'featurestore_description', and 'inode_id' [2023-10-26, 21:48:26 +07] {taskinstance.py:1400} INFO - Marking task as FAILED. dag_id=ml_pipeline, task_id=run_feature_pipeline, execution_date=20231026T144659, start_date=20231026T144705, end_date=20231026T144826 [2023-10-26, 21:48:26 +07] {standard_task_runner.py:104} ERROR - Failed to execute job 31 for task run_feature_pipeline (Process returned non-zero exit status 1. __init__() missing 3 required positional arguments: 'hdfs_store_path', 'featurestore_description', and 'inode_id'; 3001) [2023-10-26, 21:48:26 +07] {local_task_job_runner.py:228} INFO - Task exited with return code 1 [2023-10-26, 21:48:26 +07] {taskinstance.py:2778} INFO - 0 downstream tasks scheduled from follow-on schedule check
closed
2023-10-26T15:00:50Z
2023-10-27T03:24:48Z
https://github.com/iusztinpaul/energy-forecasting/issues/22
[]
minhct13
1
ranaroussi/yfinance
pandas
1,508
Regular market price for indices
If I use either ``` stock = yf.Ticker("^DJI") stock = yf.Ticker("DJIA") ``` there is no longer a "regular market price" field, although there used to be one. What happened?
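Not in the original report: a hedged sketch of where this field is usually read from. Whether Yahoo still populates it for index tickers is exactly what this issue is about, so the lookup may come back as None.

```python
import yfinance as yf

ticker = yf.Ticker("^DJI")

# "regularMarketPrice" is the Yahoo field name usually surfaced via .info;
# it may be missing or None for index symbols, which is the behaviour reported here.
price = ticker.info.get("regularMarketPrice")
print(price)
```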
closed
2023-04-26T19:03:25Z
2023-09-09T17:30:14Z
https://github.com/ranaroussi/yfinance/issues/1508
[]
tenaciouslyantediluvian
1
mlfoundations/open_clip
computer-vision
245
Fix stochasticity in tests
Some tests seem to pass/fail arbitrarily. Inference test seems to be the culprit: ``` =================================== FAILURES =================================== _________ test_inference_with_data[timm-swin_base_patch4_window7_224] __________ ```
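Not from the issue itself: one common way to pin down this kind of flakiness is to seed every RNG the inference test touches before comparing outputs. A minimal sketch follows; whether open_clip's test needs anything beyond these seeds is an assumption.

```python
import random

import numpy as np
import torch


def seed_everything(seed: int = 0) -> None:
    # Seed the RNGs commonly involved in a PyTorch inference test so that
    # repeated runs produce the same tensors and the assertions stop flapping.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
```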
closed
2022-11-23T04:35:22Z
2022-12-09T00:40:10Z
https://github.com/mlfoundations/open_clip/issues/245
[ "bug" ]
iejMac
8
graphdeco-inria/gaussian-splatting
computer-vision
910
Could not show results with SIBR_gaussian_viewer
I have trained the model on the provided truck scene on my Ubuntu server, which has no GUI. As suggested, I use xvfb-run as follows: ```shell xvfb-run -s "-screen 0 1400x900x24" SIBR_viewers/install/bin/SIBR_gaussianViewer_app -m /root/onethingai-tmp/3dgs/output/truck/ ``` But it reports some errors and gets stuck after that, as shown below: ![image](https://github.com/user-attachments/assets/f08e3919-482f-436e-99ef-5d718852e764) How can I tackle the error and view the 3D scene?
open
2024-07-28T11:06:13Z
2024-07-28T19:42:48Z
https://github.com/graphdeco-inria/gaussian-splatting/issues/910
[]
JackeyLee007
1
computationalmodelling/nbval
pytest
204
`--sanitize-with` option seems to be behaving weirdly with "newly computed (test) output"
We have this failure below and we could not understand why our `output-sanitize.cfg` regex file is unable to cover the diff. Why does the "**newly computed (test) output**" have the "**Text(0.5, 91.20243008191655, 'Longitude')**" part in it? _ pavics-sdi-fix_nbs_jupyter_alpha/docs/source/notebooks/regridding.ipynb::Cell 27 _ Notebook cell execution failed Cell 27: Cell outputs differ Input: # Now we can plot easily the results as a choropleth map! ax = shapes_data.plot( "tasmin", legend=True, legend_kwds={"label": "Minimal temperature 1993-05-20 [K]"} ) ax.set_ylabel("Latitude") ax.set_xlabel("Longitude"); Traceback: dissimilar number of outputs for key "text/plain"<<<<<<<<<<<< Reference outputs from ipynb file: <Figure size LENGTHxWIDTH with N Axes> ============ disagrees with newly computed (test) output: Text(0.5, 91.20243008191655, 'Longitude') <Figure size LENGTHxWIDTH with N Axes> >>>>>>>>>>>> `output-sanitize.cfg`: ``` [finch-figure-size] regex: <Figure size \d+x\d+\swith\s\d\sAxes> replace: <Figure size LENGTHxWIDTH with N Axes> ``` Full `output-sanitize.cfg` file: https://github.com/Ouranosinc/PAVICS-e2e-workflow-tests/blob/a4592eab55ad177b00cba77f126be56ff6566287/notebooks/output-sanitize.cfg#L92-L94 Notebook: https://github.com/Ouranosinc/pavics-sdi/blob/4ffec2df463b413c78991e9481fbe182537b3a65/docs/source/notebooks/regridding.ipynb command: py.test --nbval pavics-sdi-fix_nbs_jupyter_alpha_refresh_output/docs/source/notebooks/regridding.ipynb --sanitize-with notebooks/output-sanitize.cfg --dist=loadscope --numprocesses=0 ============================= test session starts ============================== platform linux -- Python 3.10.13, pytest-8.1.1, pluggy-1.4.0 rootdir: /home/jenkins/agent/workspace/_workflow-tests_new-docker-build plugins: anyio-4.3.0, dash-2.16.1, nbval-0.11.0, tornasync-0.6.0.post2, xdist-3.5.0
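Not part of the original report: a small illustration of why the configured sanitizer cannot reconcile these outputs. The rule only rewrites the `<Figure size ...>` repr, while the newly computed run contains an additional `Text(...)` output entry that no rule touches, so the output counts differ before any substitution matters (the figure dimensions below are made up).

```python
import re

figure_regex = r"<Figure size \d+x\d+\swith\s\d\sAxes>"
replacement = "<Figure size LENGTHxWIDTH with N Axes>"

reference_output = "<Figure size 640x480 with 1 Axes>"
new_outputs = ["Text(0.5, 91.20243008191655, 'Longitude')", "<Figure size 640x480 with 1 Axes>"]

# The configured rule normalizes the Figure repr ...
print(re.sub(figure_regex, replacement, reference_output))

# ... but the new run has an *extra* output entry that the rule never touches,
# so nbval reports "dissimilar number of outputs" regardless of sanitization.
for out in new_outputs:
    print(re.sub(figure_regex, replacement, out))
```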
open
2024-03-23T19:21:11Z
2024-03-23T19:33:05Z
https://github.com/computationalmodelling/nbval/issues/204
[]
tlvu
0
xlwings/xlwings
automation
1,802
add .clear_formats() to Sheet & Range
Hi Felix, clear() and clear_contents() are already available, but .clear_formats() is not. It could be useful to add it (today, I monkey-patch xlwings when needed). Tell me if you would like a PR (the changes are trivial ;-))
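Not from the original issue: a rough idea of the kind of monkey patch meant here, assuming Windows, where the COM object exposed by `.api` has a `ClearFormats()` method (the macOS AppleScript backend would need a different call). This is a sketch, not the eventual xlwings implementation.

```python
import xlwings as xw


def clear_formats(self):
    # Delegate to the Excel COM API: ClearFormats() removes formatting
    # but leaves the cell values untouched (Windows only).
    self.api.ClearFormats()


# Hypothetical monkey patch until an official Range.clear_formats() exists.
xw.Range.clear_formats = clear_formats

book = xw.Book()
book.sheets[0].range("A1:C3").clear_formats()
```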
closed
2022-01-27T09:28:30Z
2022-02-06T08:16:40Z
https://github.com/xlwings/xlwings/issues/1802
[ "enhancement" ]
sdementen
1
amidaware/tacticalrmm
django
1,124
NAT Service not working outside of local network.
TCP port 4222 is not reachable from outside my local network. Telnet works locally, but from outside the network it does not... iptables currently accepts incoming and outgoing connections and ufw is disabled... I don't know what the issue is.
closed
2022-05-11T18:53:50Z
2022-05-11T18:54:24Z
https://github.com/amidaware/tacticalrmm/issues/1124
[]
ashermyers
0
getsentry/sentry
python
86,994
Include additional User Feedback details when creating a Jira Server Issue from Sentry User Feedback
### Problem Statement When creating an issue in Jira Server from User Feedback in Sentry, the issue is successfully created, however, the description currently only includes a URL link to Sentry. I would like the Jira Server issue description to include more detailed information from the User Feedback, such as the user's username, their message, and any attached screenshots. Reported [via this ZD ticket](https://sentry.zendesk.com/agent/tickets/147414). ### Solution Brainstorm _No response_ ### Product Area User Feedback
open
2025-03-13T15:45:14Z
2025-03-13T20:06:41Z
https://github.com/getsentry/sentry/issues/86994
[ "Product Area: User Feedback" ]
kerenkhatiwada
3
autokey/autokey
automation
584
Insert phrase using abbreviation and tab not working as expected
## Classification: Bug ## Reproducibility: Always (?) ## Version AutoKey version: 0.96.0-beta.5 Used GUI (Gtk, Qt, or both): gtk Linux Distribution: Linux Mint 20.2 Uma ## Summary In LibreOffice Writer, using backspace to delete the tab after the phrase is inserted deletes everything except the first few replaced characters (so far the number of characters remaining doesn't seem to depend on the length of the inserted phrase). In Sticky Notes, backspace deletes everything - it acts as undo and leaves just the abbreviation. In Google Mail the phrase is inserted but the abbreviation isn't removed, and using backspace to delete the tab leaves the abbreviation and random characters. ## Steps to Reproduce (if applicable) - Set up an abbreviation to enter a phrase using tab as the trigger - as in this example (edited - had uploaded the wrong screenshot) - ![Screenshot from 2021-07-20 12-03-46](https://user-images.githubusercontent.com/85312182/126313704-ccf077c2-b459-4d0a-98e7-a1cdbbaa993e.png) - Press backspace to get rid of the tab. - The example in the screenshot gives 'ful' in LibreOffice - another example: apple crumble & custard with trigger abbreviation app gives Appl - in Sticky Notes only the original abbreviation remains - in Google Mail: ![Screenshot from 2021-07-19 23-45-54](https://user-images.githubusercontent.com/85312182/126237345-23189e80-7764-4d4d-b08a-837f1e25151d.png) ## Expected Results Just the phrase remains. ## Actual Results Varies depending on the application used, as in the summary. ## Notes Workaround: press enter/click elsewhere and then you can delete the tab.
open
2021-07-19T22:55:19Z
2025-02-27T09:01:31Z
https://github.com/autokey/autokey/issues/584
[ "bug", "autokey triggers", "user support" ]
unlucky67
9
pydantic/FastUI
fastapi
341
could this project be simplified/refactored with FastHTML?
https://github.com/AnswerDotAI/fasthtml Maybe all the complications with NPM, React, bundling that are not familiar to python users like myself could be removed?
closed
2024-08-09T18:03:57Z
2024-10-10T07:19:18Z
https://github.com/pydantic/FastUI/issues/341
[]
rbavery
6
scanapi/scanapi
rest-api
362
Remove changelog entries that shouldn't be there
## Description Following our [changelog guide](https://github.com/scanapi/scanapi/wiki/Changelog#what-warrants-a-changelog-entry) there are some changelog entries that should not be there in the `Unreleased` section. We need to clean it up before a release.
closed
2021-04-22T14:34:15Z
2021-04-22T14:47:16Z
https://github.com/scanapi/scanapi/issues/362
[ "Refactor" ]
camilamaia
0
ets-labs/python-dependency-injector
flask
105
Rename AbstractCatalog to DeclarativeCatalog (with backward compatibility)
closed
2015-11-09T22:04:08Z
2015-11-10T08:42:55Z
https://github.com/ets-labs/python-dependency-injector/issues/105
[ "feature", "refactoring" ]
rmk135
0
roboflow/supervision
pytorch
844
[PolygonZone] - allow `triggering_position` to be `Iterable[Position]`
### Description Update [`PolygonZone`](https://github.com/roboflow/supervision/blob/87a4927d03b6d8ec57208e8e3d01094135f9c829/supervision/detection/tools/polygon_zone.py#L15) logic so that the zone can be triggered not by a single anchor but by multiple anchors being inside the zone. This type of logic is already implemented in [`LineZone`](https://github.com/roboflow/supervision/blob/87a4927d03b6d8ec57208e8e3d01094135f9c829/supervision/detection/line_counter.py#L12). - Rename `triggering_position` to `triggering_anchors` to be consistent with the `LineZone` naming convention. - Update the type of the argument from `Position` to `Iterable[Position]`. - Maintain the default behavior. The zone should, by default, be triggered by `Position.BOTTOM_CENTER`. ### API ```python class PolygonZone: def __init__( self, polygon: np.ndarray, frame_resolution_wh: Tuple[int, int], triggering_anchors: Iterable[Position] = (Position.BOTTOM_CENTER, ) ): pass def trigger(self, detections: Detections) -> np.ndarray: pass ``` ### Additional - Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻
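Not part of the proposal itself: a standalone sketch of the core check the new `triggering_anchors` behavior implies, i.e. a detection only triggers the zone when every chosen anchor point falls inside the polygon. The names and the matplotlib-based point-in-polygon test are illustrative assumptions, not the supervision implementation.

```python
import numpy as np
from matplotlib.path import Path

polygon = np.array([[0, 0], [100, 0], [100, 100], [0, 100]])
zone = Path(polygon)

# One row per detection, one (x, y) entry per triggering anchor,
# e.g. BOTTOM_CENTER and TOP_CENTER of each bounding box.
anchors_per_detection = np.array([
    [[50, 90], [50, 10]],   # both anchors inside  -> triggers
    [[50, 90], [50, -10]],  # one anchor outside   -> does not trigger
])

# A detection triggers only if *all* of its anchor points lie inside the zone.
triggered = np.array([
    zone.contains_points(anchors).all() for anchors in anchors_per_detection
])
print(triggered)  # [ True False]
```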
closed
2024-02-02T17:51:07Z
2024-12-12T09:37:48Z
https://github.com/roboflow/supervision/issues/844
[ "enhancement", "Q1.2024", "api:polygonzone" ]
SkalskiP
8
pytorch/vision
computer-vision
8,450
Let `v2.functional.gaussian_blur` backprop through `sigma` parameter
the v1 version of `gaussian_blur` allows backpropagating through sigma (example taken from https://github.com/pytorch/vision/issues/8401) ``` import torch from torchvision.transforms.functional import gaussian_blur device = "cuda" device = "cpu" k = 15 s = torch.tensor(0.3 * ((5 - 1) * 0.5 - 1) + 0.8, requires_grad=True, device=device) blurred = gaussian_blur(torch.randn(1, 3, 256, 256, device=device), k, [s]) blurred.mean().backward() print(s.grad) ``` on CPU and on GPU (after https://github.com/pytorch/vision/pull/8426). However, the v2 version fails with ``` RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn ``` The support in v1 is sort of undocumented and probably just works by luck (sigma is typically expected to be a list of floats rather than a tensor). So while it works, it's not 100% clear to me whether this is a feature we absolutely want. I guess we can implement it if it doesn't make the code much more complex or slower.
closed
2024-05-29T12:45:21Z
2024-07-29T15:45:14Z
https://github.com/pytorch/vision/issues/8450
[]
NicolasHug
3
yvann-ba/Robby-chatbot
streamlit
48
streamlit cloud server
Hello @yvann-hub, can you please write up the steps for running this on the Streamlit Cloud server? It only works locally for me. Many thanks!
closed
2023-06-12T10:51:26Z
2023-06-15T09:34:37Z
https://github.com/yvann-ba/Robby-chatbot/issues/48
[]
AhmedEwis
4
pytorch/pytorch
python
149,138
FSDP with AveragedModel
I am trying to use FSDP with `torch.optim.swa_utils.AveragedModel`, but I am getting the error ``` File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/external/python_runtime_x86_64-unknown-linux-gnu/lib/python3.10/copy.py", line 161, in deepcopy rv = reductor(4) TypeError: cannot pickle 'module' object ``` This happens at `deepcopy.copy` inside `torch.optim.swa_utils.AveragedModel.__init__` and `module` seems to refer to `<module 'torch.cuda' from '/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/train.runfiles/pip-core_torch/site-packages/torch/cuda/__init__.py'>` 1. Is FSDP supposed to work with `torch.optim.swa_utils.AveragedModel`? 2. If not, how can one implement it? My plan to avoid `deepcopy` was to instead use a sharded state dict and compute the average separately on each rank to save memory. However, I can't find an easy way to convert the sharded state dict back to a full state dict offloaded to CPU when I need to save the state dict. Any tips on that? cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
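Not from the original issue: for the second question (getting a full, CPU-offloaded state dict back from a sharded FSDP model), a minimal sketch using FSDP's state_dict_type context manager. Whether this fits the AveragedModel use case is exactly what is being asked, so treat it as one possible direction rather than a confirmed answer.

```python
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import FullStateDictConfig, StateDictType


def gather_full_state_dict(model: FSDP):
    # Gather the sharded parameters into a single full state dict,
    # offloaded to CPU and materialized only on rank 0 to save GPU memory.
    cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
    with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, cfg):
        state = model.state_dict()
    return state if dist.get_rank() == 0 else None
```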
open
2025-03-13T17:46:52Z
2025-03-21T14:55:27Z
https://github.com/pytorch/pytorch/issues/149138
[ "oncall: distributed", "module: fsdp" ]
nikonikolov
2
ultralytics/yolov5
pytorch
12,534
When running the validation set to save labels, there is no labels folder in the expX folder.
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report. ### YOLOv5 Component _No response_ ### Bug When running validation with label saving enabled, there is no labels folder in the expX folder, so line 261 of val.py reports an error. Should I add code like this: ### Environment _No response_ ### Minimal Reproducible Example if save_txt: if not os.path.isdir(save_dir / 'labels'): os.mkdir(save_dir / 'labels') save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / f'{path.stem}.txt') ### Additional _No response_ ### Are you willing to submit a PR? - [x] Yes I'd like to help by submitting a PR!
closed
2023-12-21T11:49:49Z
2024-10-20T19:35:04Z
https://github.com/ultralytics/yolov5/issues/12534
[ "bug" ]
Gary55555
2
WeblateOrg/weblate
django
13,762
String translation history is deleted
### Describe the issue After changing a string in the remote repository, translations are deleted along with previous history. We have disabled "Use fuzzy matching" in 400 files (it inserted incorrect translations, several thousand incorrect ones). The problem has been there since the very beginning, when we started using weblate. To work around this issue, we generate one file with all changes with "fuzzy matching" enabled and add it to the project. We translate the changes file using automatic translation (automatic translation generates a lot of false translations, because it is not based on matching by string id). After updating the changes file with "fuzzy matching" enabled, the translations are not removed. ### I already tried - [x] I've read and searched [the documentation](https://docs.weblate.org/). - [x] I've searched for similar filed issues in this repository. ### Steps to reproduce the behavior 1) file settings: ![Image](https://github.com/user-attachments/assets/96280793-e3ae-4b64-9edd-11a7d93f9189) 2) Changing the string in the remote repository: ![Image](https://github.com/user-attachments/assets/057cb1da-6455-48f1-a358-d4cde832bd30) 3)A changed string that still has the same id is treated as a new string. The string translation has been deleted, and the previous translation history has also been deleted: ![Image](https://github.com/user-attachments/assets/be1b7c74-ae4d-4606-8335-0084bb482bf9) 4) The automatic suggestions are also empty because they do not fit within the threshold: ![Image](https://github.com/user-attachments/assets/0629992d-21b9-4726-bccb-e14f35ca0ae3) ### Expected behavior The history of previous translations is not deleted. Additionally, it would be useful to know that the string had a translation but was removed by the changes. ### Screenshots _No response_ ### Exception traceback ```pytb ``` ### How do you run Weblate? Docker container ### Weblate versions 5.10-dev — [0656bbd123047d8b44d77cb7f982093df625dd7a](https://github.com/WeblateOrg/weblate/commits/0656bbd123047d8b44d77cb7f982093df625dd7a) ### Weblate deploy checks ```shell ``` ### Additional context _No response_
open
2025-02-06T07:05:30Z
2025-02-28T08:05:59Z
https://github.com/WeblateOrg/weblate/issues/13762
[]
tomkolp
8
encode/httpx
asyncio
2,169
RFE: provide support for `rfc3986` 2.0.0
Currently `rfc3986` 2.0.0 is not supported. https://github.com/encode/httpx/blob/3af5146788f2945c806d4225cd588a2aa8073b90/setup.py#L58-L62 Do you have any plans to provide such support?
closed
2022-04-07T08:06:04Z
2023-01-10T10:36:17Z
https://github.com/encode/httpx/issues/2169
[ "external" ]
kloczek
1
davidteather/TikTok-Api
api
182
[BUG] - 'browser' object has no attribute 'signature'
**Describe the bug** I think this may be a similar issue as the 'verifyFp' problem. It happens occasionally, despite my proxy server being in the US. ``` app/worker.14 [2020-07-14 16:38:17,509: WARNING/ForkPoolWorker-1] 'browser' object has no attribute 'signature' app/worker.14 save_tiktoks_for_user(record_id, username) app/worker.14 File "/app/tasks.py", line 29 app/worker.14 tiktoks = api.byUsername(tiktok_username, count=250) app/worker.14 File "/app/.heroku/python/lib/python3.7/site-packages/TikTokApi/tiktok.py", line 147, in byUsername app/worker.14 return self.getUser(username, language, proxy=proxy)['userInfo']['user'] app/worker.14 "&_signature=" + b.signature ```
closed
2020-07-14T16:41:02Z
2020-08-19T21:32:33Z
https://github.com/davidteather/TikTok-Api/issues/182
[ "bug" ]
kbyatnal
14
pydata/xarray
pandas
9,951
⚠️ Nightly upstream-dev CI failed ⚠️
[Workflow Run URL](https://github.com/pydata/xarray/actions/runs/12819944199) <details><summary>Python 3.12 Test Summary</summary> ``` xarray/tests/test_coding_times.py::test_encode_cf_timedelta_casting_value_error[False]: ValueError: output array is read-only xarray/tests/test_variable.py::TestVariable::test_index_0d_datetime: AssertionError: assert dtype('<M8[us]') == 'datetime64[ns]' + where dtype('<M8[us]') = np.datetime64('2000-01-01T00:00:00.000000').dtype xarray/tests/test_variable.py::TestVariable::test_datetime64_conversion[values5-ns]: AssertionError: assert dtype('<M8[us]') == dtype('<M8[ns]') + where dtype('<M8[us]') = <xarray.Variable (t: 3)> Size: 24B\narray(['1970-01-01T00:00:00.000000', '1970-01-02T00:00:00.000000',\n '1970-01-03T00:00:00.000000'], dtype='datetime64[us]').dtype + and dtype('<M8[ns]') = <class 'numpy.dtype'>('datetime64[ns]') + where <class 'numpy.dtype'> = np.dtype xarray/tests/test_variable.py::TestVariable::test_datetime64_conversion_scalar[values1-ns]: AssertionError: assert dtype('<M8[s]') == dtype('<M8[ns]') + where dtype('<M8[s]') = <xarray.Variable ()> Size: 8B\narray('2000-01-01T00:00:00', dtype='datetime64[s]').dtype + and dtype('<M8[ns]') = <class 'numpy.dtype'>('datetime64[ns]') + where <class 'numpy.dtype'> = np.dtype xarray/tests/test_variable.py::TestVariable::test_datetime64_conversion_scalar[values2-ns]: AssertionError: assert dtype('<M8[us]') == dtype('<M8[ns]') + where dtype('<M8[us]') = <xarray.Variable ()> Size: 8B\narray('2000-01-01T00:00:00.000000', dtype='datetime64[us]').dtype + and dtype('<M8[ns]') = <class 'numpy.dtype'>('datetime64[ns]') + where <class 'numpy.dtype'> = np.dtype xarray/tests/test_variable.py::TestVariable::test_0d_datetime: AssertionError: assert dtype('<M8[s]') == dtype('<M8[ns]') + where dtype('<M8[s]') = <xarray.Variable ()> Size: 8B\narray('2000-01-01T00:00:00', dtype='datetime64[s]').dtype + and dtype('<M8[ns]') = <class 'numpy.dtype'>('datetime64[ns]') + where <class 'numpy.dtype'> = np.dtype xarray/tests/test_variable.py::TestVariableWithDask::test_index_0d_datetime: AssertionError: assert dtype('<M8[us]') == 'datetime64[ns]' + where dtype('<M8[us]') = np.datetime64('2000-01-01T00:00:00.000000').dtype xarray/tests/test_variable.py::TestVariableWithDask::test_datetime64_conversion[values5-ns]: AssertionError: assert dtype('<M8[us]') == dtype('<M8[ns]') + where dtype('<M8[us]') = <xarray.Variable (t: 3)> Size: 24B\ndask.array<array, shape=(3,), dtype=datetime64[us], chunksize=(3,), chunktype=numpy.ndarray>.dtype + and dtype('<M8[ns]') = <class 'numpy.dtype'>('datetime64[ns]') + where <class 'numpy.dtype'> = np.dtype xarray/tests/test_variable.py::TestIndexVariable::test_index_0d_datetime: AssertionError: assert dtype('<M8[us]') == 'datetime64[ns]' + where dtype('<M8[us]') = np.datetime64('2000-01-01T00:00:00.000000').dtype xarray/tests/test_variable.py::TestIndexVariable::test_datetime64_conversion[values5-ns]: AssertionError: assert dtype('<M8[us]') == dtype('<M8[ns]') + where dtype('<M8[us]') = <xarray.IndexVariable 't' (t: 3)> Size: 24B\narray(['1970-01-01T00:00:00.000000', '1970-01-02T00:00:00.000000',\n '1970-01-03T00:00:00.000000'], dtype='datetime64[us]').dtype + and dtype('<M8[ns]') = <class 'numpy.dtype'>('datetime64[ns]') + where <class 'numpy.dtype'> = np.dtype xarray/tests/test_variable.py::TestAsCompatibleData::test_datetime: AssertionError: assert dtype('<M8[ns]') == dtype('<M8[us]') + where dtype('<M8[ns]') = <class 'numpy.dtype'>('datetime64[ns]') + where <class 'numpy.dtype'> = np.dtype + and 
dtype('<M8[us]') = array('2000-01-01T00:00:00.000000', dtype='datetime64[us]').dtype xarray/tests/test_variable.py::test_datetime_conversion[2000-01-01 00:00:00-ns]: AssertionError: assert dtype('<M8[us]') == dtype('<M8[ns]') + where dtype('<M8[us]') = <xarray.Variable ()> Size: 8B\narray('2000-01-01T00:00:00.000000', dtype='datetime64[us]').dtype + and dtype('<M8[ns]') = <class 'numpy.dtype'>('datetime64[ns]') + where <class 'numpy.dtype'> = np.dtype xarray/tests/test_variable.py::test_datetime_conversion[[datetime.datetime(2000, 1, 1, 0, 0)]-ns]: AssertionError: assert dtype('<M8[us]') == dtype('<M8[ns]') + where dtype('<M8[us]') = <xarray.Variable (time: 1)> Size: 8B\narray(['2000-01-01T00:00:00.000000'], dtype='datetime64[us]').dtype + and dtype('<M8[ns]') = <class 'numpy.dtype'>('datetime64[ns]') + where <class 'numpy.dtype'> = np.dtype ``` </details>
closed
2025-01-16T00:27:03Z
2025-01-17T13:16:48Z
https://github.com/pydata/xarray/issues/9951
[ "CI" ]
github-actions[bot]
0
nteract/testbook
pytest
137
testbook loads wrong signature for function
Hello, i have following test: ``` python import testbook import numpy as np from assertpy import assert_that from pytest import fixture @fixture(scope='module') def tb(): with testbook.testbook('h2_2.ipynb', execute=True) as tb: yield tb def test_some_small_function(tb): some_function = tb.ref("some_function") test_w_1_0 = np.array([1]) test_no_bias = np.array([0]) test_w_2_1 = np.array([1]) print(locals()) assert_that(some_function(0, test_w_1_0, test_no_bias, test_w_2_1, np.sign)).contains([0, 1, -1]) ``` having in the notebook: ``` python def some_function(input_var: float, first: np.ndarray, bias: np.ndarray, second: np.ndarray, the_transfer_function): h: np.ndarray = input_var * first - bias return np.dot(second, the_transfer_function(h)) ``` and I get the following TypeError error: > E TypeError Traceback (most recent call last) E /var/folders/jk/bq46f4ys107bjb6jwvn0yhqr0000gp/T/ipykernel_52132/3127954361.py in <module> E ----> 1 some_function(*(0, "[1]", "[0]", "[1]", "<ufunc 'sign'>", ), **{}) E E /var/folders/jk/bq46f4ys107bjb6jwvn0yhqr0000gp/T/ipykernel_52132/179938020.py in some_function(input_var, first, bias, second, the_transfer_function) E 1 def some_function(input_var: float, first: np.ndarray, bias: np.ndarray, second: np.ndarray, the_transfer_function): E ----> 2 h: np.ndarray = input_var * first - bias E 3 return np.dot(second, the_transfer_function(h)) E E TypeError: unsupported operand type(s) for -: 'str' and 'str' E TypeError: unsupported operand type(s) for -: 'str' and 'str' I have added extra type information to make it clear to you the intend of the code. Tests inside the notebook are running successfully > test_h2.py::test_some_small_function /Users/1000ber-5078/PycharmProjects/machine-intelligence/venv/lib/python3.8/site-packages/debugpy/_vendored/force_pydevd.py:20: UserWarning: incompatible copy of pydevd already imported: /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/__init__.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/_pydev_calltip_util.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/_pydev_completer.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/_pydev_filesystem_encoding.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/_pydev_imports_tipper.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/_pydev_tipper_common.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/fix_getpass.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_code_executor.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_console_types.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_imports.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_ipython_code_executor.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_ipython_console_011.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_is_thread_alive.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_log.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_monkey.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_override.py 
/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_stdin.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/__init__.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_saved_modules.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/__init__.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_additional_thread_info.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_additional_thread_info_regular.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_breakpointhook.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_breakpoints.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_bytecode_utils.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_collect_try_except_info.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_comm.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_comm_constants.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_command_line_handling.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_console.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_console_integration.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_console_pytest.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_constants.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_custom_frames.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_cython_darwin_38_64.cpython-38-darwin.so /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_cython_darwin_38_64.cpython-38-darwin.so /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_cython_wrapper.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_dont_trace.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_dont_trace_files.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_exec2.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_extension_api.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_extension_utils.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_frame.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_frame_utils.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_import_class.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_io.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_kill_all_pydevd_threads.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_plugin_utils.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_process_net_command.py 
/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_resolver.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_save_locals.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_signature.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_tables.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_trace_api.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_trace_dispatch.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_traceproperty.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_utils.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_vars.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_vm_type.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_xml.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_frame_eval/__init__.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_frame_eval/pydevd_frame_eval_main.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydev_ipython/__init__.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydev_ipython/inputhook.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydev_ipython/matplotlibtools.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_concurrency_analyser/__init__.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_concurrency_analyser/pydevd_concurrency_logger.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_concurrency_analyser/pydevd_thread_wrappers.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_file_utils.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_plugins/__init__.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_plugins/django_debug.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_plugins/extensions/__init__.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_plugins/extensions/types/__init__.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_plugins/extensions/types/pydevd_helpers.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_plugins/extensions/types/pydevd_plugin_numpy_types.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_plugins/extensions/types/pydevd_plugins_django_form_str.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_plugins/jinja2_debug.py /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd_tracing.py warnings.warn(msg + ':\n {}'.format('\n '.join(_unvendored))) [IPKernelApp] WARNING | debugpy_stream undefined, debugging will not be enabled FAILED [100%]{'tb': <testbook.client.TestbookNotebookClient object at 0x7f9ef824d2e0>, 'some_function': '<function some_function at 0x7fb2b2e70c10>', 'test_w_1_0': array([1]), 'test_no_bias': array([0]), 'test_w_2_1': array([1])} test_h2.py:12 (test_some_small_function) self = <testbook.client.TestbookNotebookClient object at 0x7f9ef824d2e0> cell = [8], kwargs = {}, cell_indexes = [8], executed_cells = 
[], idx = 8 def execute_cell(self, cell, **kwargs) -> Union[Dict, List[Dict]]: """ Executes a cell or list of cells """ if isinstance(cell, slice): start, stop = self._cell_index(cell.start), self._cell_index(cell.stop) if cell.step is not None: raise TestbookError('testbook does not support step argument') cell = range(start, stop + 1) elif isinstance(cell, str) or isinstance(cell, int): cell = [cell] cell_indexes = cell if all(isinstance(x, str) for x in cell): cell_indexes = [self._cell_index(tag) for tag in cell] executed_cells = [] for idx in cell_indexes: try: > cell = super().execute_cell(self.nb['cells'][idx], idx, **kwargs) ../venv/lib/python3.8/site-packages/testbook/client.py:133: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (<testbook.client.TestbookNotebookClient object at 0x7f9ef824d2e0>, {'id': '6ac327d5', 'cell_type': 'code', 'metadata'...0m\x1b[0;34m\x1b[0m\x1b[0m\n', "\x1b[0;31mTypeError\x1b[0m: unsupported operand type(s) for -: 'str' and 'str'"]}]}, 8) kwargs = {} def wrapped(*args, **kwargs): > return just_run(coro(*args, **kwargs)) ../venv/lib/python3.8/site-packages/nbclient/util.py:78: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ coro = <coroutine object NotebookClient.async_execute_cell at 0x7f9ef81fea40> def just_run(coro: Awaitable) -> Any: """Make the coroutine run, even if there is an event loop running (using nest_asyncio)""" # original from vaex/asyncio.py loop = asyncio._get_running_loop() if loop is None: had_running_loop = False try: loop = asyncio.get_event_loop() except RuntimeError: # we can still get 'There is no current event loop in ...' loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) else: had_running_loop = True if had_running_loop: # if there is a running loop, we patch using nest_asyncio # to have reentrant event loops check_ipython() import nest_asyncio nest_asyncio.apply() check_patch_tornado() > return loop.run_until_complete(coro) ../venv/lib/python3.8/site-packages/nbclient/util.py:57: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_UnixSelectorEventLoop running=False closed=False debug=False> future = <Task finished name='Task-48' coro=<NotebookClient.async_execute_cell() done, defined at /Users/1000ber-5078/PycharmPr...rted operand type(s) for -: \'str\' and \'str\'\nTypeError: unsupported operand type(s) for -: \'str\' and \'str\'\n')> def run_until_complete(self, future): """Run until the Future is done. If the argument is a coroutine, it is wrapped in a Task. WARNING: It would be disastrous to call run_until_complete() with the same coroutine twice -- it would wrap it in two different Tasks and that can't be good. Return the Future's result, or raise its exception. """ self._check_closed() self._check_running() new_task = not futures.isfuture(future) future = tasks.ensure_future(future, loop=self) if new_task: # An exception is raised if the future didn't complete, so there # is no need to log the "destroy pending task" message future._log_destroy_pending = False future.add_done_callback(_run_until_complete_cb) try: self.run_forever() except: if new_task and future.done() and not future.cancelled(): # The coroutine raised a BaseException. Consume the exception # to not log a warning, the caller doesn't have access to the # local task. 
future.exception() raise finally: future.remove_done_callback(_run_until_complete_cb) if not future.done(): raise RuntimeError('Event loop stopped before Future completed.') > return future.result() /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/base_events.py:616: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <testbook.client.TestbookNotebookClient object at 0x7f9ef824d2e0> cell = {'id': '6ac327d5', 'cell_type': 'code', 'metadata': {'execution': {'iopub.status.busy': '2021-10-23T15:57:12.875064Z',...x1b[0m\x1b[0;34m\x1b[0m\x1b[0m\n', "\x1b[0;31mTypeError\x1b[0m: unsupported operand type(s) for -: 'str' and 'str'"]}]} cell_index = 8, execution_count = None, store_history = True async def async_execute_cell( self, cell: NotebookNode, cell_index: int, execution_count: t.Optional[int] = None, store_history: bool = True) -> NotebookNode: """ Executes a single code cell. To execute all cells see :meth:`execute`. Parameters ---------- cell : nbformat.NotebookNode The cell which is currently being processed. cell_index : int The position of the cell within the notebook object. execution_count : int The execution count to be assigned to the cell (default: Use kernel response) store_history : bool Determines if history should be stored in the kernel (default: False). Specific to ipython kernels, which can store command histories. Returns ------- output : dict The execution output payload (or None for no output). Raises ------ CellExecutionError If execution failed and should raise an exception, this will be raised with defaults about the failure. Returns ------- cell : NotebookNode The cell which was just processed. """ assert self.kc is not None if cell.cell_type != 'code' or not cell.source.strip(): self.log.debug("Skipping non-executing cell %s", cell_index) return cell if self.record_timing and 'execution' not in cell['metadata']: cell['metadata']['execution'] = {} self.log.debug("Executing cell:\n%s", cell.source) cell_allows_errors = (not self.force_raise_errors) and ( self.allow_errors or "raises-exception" in cell.metadata.get("tags", [])) parent_msg_id = await ensure_async( self.kc.execute( cell.source, store_history=store_history, stop_on_error=not cell_allows_errors ) ) # We launched a code cell to execute self.code_cells_executed += 1 exec_timeout = self._get_timeout(cell) cell.outputs = [] self.clear_before_next_output = False task_poll_kernel_alive = asyncio.ensure_future( self._async_poll_kernel_alive() ) task_poll_output_msg = asyncio.ensure_future( self._async_poll_output_msg(parent_msg_id, cell, cell_index) ) self.task_poll_for_reply = asyncio.ensure_future( self._async_poll_for_reply( parent_msg_id, cell, exec_timeout, task_poll_output_msg, task_poll_kernel_alive ) ) try: exec_reply = await self.task_poll_for_reply except asyncio.CancelledError: # can only be cancelled by task_poll_kernel_alive when the kernel is dead task_poll_output_msg.cancel() raise DeadKernelError("Kernel died") except Exception as e: # Best effort to cancel request if it hasn't been resolved try: # Check if the task_poll_output is doing the raising for us if not isinstance(e, CellControlSignal): task_poll_output_msg.cancel() finally: raise if execution_count: cell['execution_count'] = execution_count > self._check_raise_for_error(cell, exec_reply) ../venv/lib/python3.8/site-packages/nbclient/client.py:862: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <testbook.client.TestbookNotebookClient object at 
0x7f9ef824d2e0> cell = {'id': '6ac327d5', 'cell_type': 'code', 'metadata': {'execution': {'iopub.status.busy': '2021-10-23T15:57:12.875064Z',...x1b[0m\x1b[0;34m\x1b[0m\x1b[0m\n', "\x1b[0;31mTypeError\x1b[0m: unsupported operand type(s) for -: 'str' and 'str'"]}]} exec_reply = {'buffers': [], 'content': {'ename': 'TypeError', 'engine_info': {'engine_id': -1, 'engine_uuid': '7b8e70ee-5b55-4029-...e, 'engine': '7b8e70ee-5b55-4029-888b-a23a76f683ca', 'started': '2021-10-23T15:57:12.869603Z', 'status': 'error'}, ...} def _check_raise_for_error( self, cell: NotebookNode, exec_reply: t.Optional[t.Dict]) -> None: if exec_reply is None: return None exec_reply_content = exec_reply['content'] if exec_reply_content['status'] != 'error': return None cell_allows_errors = (not self.force_raise_errors) and ( self.allow_errors or exec_reply_content.get('ename') in self.allow_error_names or "raises-exception" in cell.metadata.get("tags", [])) if not cell_allows_errors: > raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content) E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell: E ------------------ E E some_function(*(0, "[1]", "[0]", "[1]", "<ufunc 'sign'>", ), **{}) E E ------------------ E E --------------------------------------------------------------------------- E TypeError Traceback (most recent call last) E /var/folders/jk/bq46f4ys107bjb6jwvn0yhqr0000gp/T/ipykernel_52132/3127954361.py in <module> E ----> 1 some_function(*(0, "[1]", "[0]", "[1]", "<ufunc 'sign'>", ), **{}) E E /var/folders/jk/bq46f4ys107bjb6jwvn0yhqr0000gp/T/ipykernel_52132/179938020.py in some_function(input_var, first, bias, second, the_transfer_function) E 1 def some_function(input_var: float, first: np.ndarray, bias: np.ndarray, second: np.ndarray, the_transfer_function): E ----> 2 h: np.ndarray = input_var * first - bias E 3 return np.dot(second, the_transfer_function(h)) E E TypeError: unsupported operand type(s) for -: 'str' and 'str' E TypeError: unsupported operand type(s) for -: 'str' and 'str' ../venv/lib/python3.8/site-packages/nbclient/client.py:765: CellExecutionError During handling of the above exception, another exception occurred: tb = <testbook.client.TestbookNotebookClient object at 0x7f9ef824d2e0> def test_some_small_function(tb): some_function = tb.ref("some_function") test_w_1_0 = np.array([1]) test_no_bias = np.array([0]) test_w_2_1 = np.array([1]) print(locals()) > assert_that(some_function(0, test_w_1_0, test_no_bias, test_w_2_1, np.sign)).contains([0, 1, -1]) test_h2.py:19: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../venv/lib/python3.8/site-packages/testbook/reference.py:85: in __call__ return self.tb.value(code) ../venv/lib/python3.8/site-packages/testbook/client.py:273: in value result = self.inject(code, pop=True) ../venv/lib/python3.8/site-packages/testbook/client.py:237: in inject cell = TestbookNode(self.execute_cell(inject_idx)) if run else TestbookNode(code_cell) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <testbook.client.TestbookNotebookClient object at 0x7f9ef824d2e0> cell = [8], kwargs = {}, cell_indexes = [8], executed_cells = [], idx = 8 def execute_cell(self, cell, **kwargs) -> Union[Dict, List[Dict]]: """ Executes a cell or list of cells """ if isinstance(cell, slice): start, stop = self._cell_index(cell.start), self._cell_index(cell.stop) if cell.step is not None: raise TestbookError('testbook does not support step argument') cell = range(start, stop + 
1) elif isinstance(cell, str) or isinstance(cell, int): cell = [cell] cell_indexes = cell if all(isinstance(x, str) for x in cell): cell_indexes = [self._cell_index(tag) for tag in cell] executed_cells = [] for idx in cell_indexes: try: cell = super().execute_cell(self.nb['cells'][idx], idx, **kwargs) except CellExecutionError as ce: > raise TestbookRuntimeError(ce.evalue, ce, self._get_error_class(ce.ename)) E testbook.exceptions.TestbookRuntimeError: An error occurred while executing the following cell: E ------------------ E E some_function(*(0, "[1]", "[0]", "[1]", "<ufunc 'sign'>", ), **{}) E E ------------------ E E --------------------------------------------------------------------------- E TypeError Traceback (most recent call last) E /var/folders/jk/bq46f4ys107bjb6jwvn0yhqr0000gp/T/ipykernel_52132/3127954361.py in <module> E ----> 1 some_function(*(0, "[1]", "[0]", "[1]", "<ufunc 'sign'>", ), **{}) E E /var/folders/jk/bq46f4ys107bjb6jwvn0yhqr0000gp/T/ipykernel_52132/179938020.py in some_function(input_var, first, bias, second, the_transfer_function) E 1 def some_function(input_var: float, first: np.ndarray, bias: np.ndarray, second: np.ndarray, the_transfer_function): E ----> 2 h: np.ndarray = input_var * first - bias E 3 return np.dot(second, the_transfer_function(h)) E E TypeError: unsupported operand type(s) for -: 'str' and 'str' E TypeError: unsupported operand type(s) for -: 'str' and 'str' ../venv/lib/python3.8/site-packages/testbook/client.py:135: TestbookRuntimeError ======================== 1 failed, 12 warnings in 2.97s ======================== Process finished with exit code 1
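Not part of the original report: the error shows the numpy arrays arriving in the kernel as the strings "[1]" and "[0]", which suggests the arguments are being serialized on their way through tb.ref. A hedged workaround sketch is to build the arrays inside the kernel with tb.inject instead of passing them from the test process; names follow the example above.

```python
def test_some_small_function(tb):
    # Create the inputs inside the notebook kernel so nothing has to be
    # serialized across the testbook boundary, then call the function there too.
    tb.inject(
        "import numpy as np\n"
        "result = some_function(0, np.array([1]), np.array([0]), np.array([1]), np.sign)"
    )
    # With these inputs, h = 0 * [1] - [0] = [0] and np.dot([1], sign([0])) = 0.
    assert tb.value("int(result)") == 0
```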
open
2021-10-23T16:00:15Z
2021-10-30T19:20:55Z
https://github.com/nteract/testbook/issues/137
[]
midumitrescu
1
kubeflow/katib
scikit-learn
1,789
Display more detailed information and logs of experiment on the user interface
/kind feature **Describe the solution you'd like** [A clear and concise description of what you want to happen.] We may often meet a situation that everything seems Ok on the UI,the status of experiment and trials are remaining running, but actually there are something wrong have already happened with the experiment and trials. To determine if everything is ok, we have to use commands like `kubectl logs` and `kubectl describe` to check logs and Events of almost every resource associating katib experiment. This is very unfriendly for data scientists, who may want focus on machine learning problems. **Therefore, I suggest to display more detailed information and logs of experiment on the user interface, make us easy to know if the experiment and trials are actrually runing well from the UI.** [Miscellaneous information that will assist in solving the issue.] --- <!-- Don't delete this message to encourage users to support your issue! --> Love this feature? Give it a 👍 We prioritize the features with the most 👍
closed
2022-01-24T09:24:50Z
2022-02-11T18:13:12Z
https://github.com/kubeflow/katib/issues/1789
[ "kind/feature" ]
javen218
4
tqdm/tqdm
pandas
1,574
Add wrapper for functions that are called repeatedly
## Proposed enhancement Add a decorator that wraps a function in a progress bar when you expect a specific number of calls to that function. ## Use case As a specific example I'd like to look at creating animations with matplotlib. The workflow is roughly to create a figure and then providing a method to update the figure each frame. When creating the animation matplotlib repeatedly calls the update method. Matplotlib does not support a progress bar as far as I know, however, if we wrap the update method ourselves we could still provide the user with feedback on the progress of their animation. ## Proposed implementation ```python def progress_bar_decorator(*tqdm_args, **tqdm_kwargs): progress_bar = tqdm(*tqdm_args, **tqdm_kwargs) def decorator(f): @functools.wraps(f) def wrapper(*args, **kwargs): result = f(*args, **kwargs) progress_bar.update() return result return wrapper return decorator ``` In the example of the matplotlib animation it can then be used as such: ```python figure, axes = plt.subplots(1, 1) scatter = axes.scatter(X[0], Y[0]) @progress_bar_decorator(total=len(X), desc="Generating animation") def update(frame): nonlocal scatter scatter.set_offsets(X[frame], Y[frame]) return scatter FuncAnimation(figure, update, frames=len(X), ...).save("file.mp4") ``` Matplotlib will call `update` exactly `len(X)` times causing `tqdm` to update the progress bar accordingly. In this example `X` and `Y` are some fictional data arrays.
open
2024-04-26T10:44:18Z
2024-04-26T10:44:18Z
https://github.com/tqdm/tqdm/issues/1574
[]
jvdoorn
0
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
996
Need some clarification
I am using both the pix2pix and CycleGAN models for my thesis research. I have a custom dataset with 417 image pairs and have done a lot of experiments using your repositories. I have a few questions, some of them fairly generic. I use two GPUs with 8 GB of memory each, so 16 GB of GPU memory in total. 1) I understand that batch size is a hyperparameter. Currently I use a batch size of 4 for my experiments. But apart from increasing the training speed, what is the importance of batch size? 2) What does normalisation do? I use grayscale images during training. What is the difference between batch and instance normalization, and why is it not possible to use batch normalisation in multi-GPU training? 3) Use of the dropout and eval flags. When I train pix2pix I never use the --dropout flag during testing, but for CycleGAN I do. Why is that? And when should I use the --eval flag? 4) I use the resnet_9blocks generator as I train on rectangular images. Is there any advantage of using the U-Net architecture over ResNet? I did not use U-Net since my images are 600*400. 5) I read a commit comment a few days back about the semi-aligned dataset. Could you briefly describe it and how to use it? Can it be used for both the pix2pix and CycleGAN models? 6) Is it possible to save the training graphs from the visdom server? @junyanz, sorry for asking so many questions :).. Thanks for such a great repository.
closed
2020-04-17T08:25:25Z
2020-04-22T10:27:45Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/996
[]
kalai2033
3
ultralytics/ultralytics
deep-learning
19,614
jetpack=5.1.3, pt export engine OOM
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report. ### Ultralytics YOLO Component _No response_ ### Bug When I converted the pt to an engine file, there was an OOM ### Environment Jetson Nano Jetpack: 5.1.3 TensorRT: 8.5.2.2 CUDA: 11.4.315 Python: 3.8.10 torch 2.1.0a0+41361538.nv23.6 torchvision 0.16.0 ultralytics 8.2.35 onnxruntime-gpu 1.17.0 onnx 1.17.0 ### Minimal Reproducible Example hers's my code: ```python from ultralytics import YOLO model_path = 'best.pt' model = YOLO(model_path) model.export(format='engine') # load trt_model = YOLO('best.engine', task='detect') ``` ### Additional Log: ```txt (.perf) nvidia@nvidia:~/work$ python3 pt_to_engine.py WARNING ⚠️ TensorRT requires GPU export, automatically assigning device=0 Ultralytics YOLOv8.2.35 🚀 Python-3.8.10 torch-2.1.0a0+41361538.nv23.06 CUDA:0 (Orin, 3426MiB) Model summary (fused): 168 layers, 3006623 parameters, 0 gradients, 8.1 GFLOPs PyTorch: starting from 'best.pt' with input shape (1, 3, 416, 416) BCHW and output shape(s) (1, 9, 3549) (5.9 MB) ONNX: starting export with onnx 1.17.0 opset 17... ====== Diagnostic Run torch.onnx.export version 2.1.0a0+41361538.nv23.06 ======= verbose: False, log level: Level.ERROR ======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ======================== ONNX: export success ✅ 1.5s, saved as 'best.onnx' (11.6 MB) TensorRT: starting export with TensorRT 8.5.2.2... [03/10/2025-18:09:44] [TRT] [I] [MemUsageChange] Init CUDA: CPU +215, GPU +0, now: CPU 1834, GPU 3324 (MiB) Killed ``` ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
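A sketch of the knobs commonly tried when the TensorRT engine build gets OOM-killed on a memory-constrained Jetson, based on the log above (the process dies right after TensorRT initialisation on a ~3.4 GiB device). `half`, `workspace`, `imgsz` and `device` are documented export arguments; whether they are sufficient on this particular board is an assumption to verify, and adding swap on the Jetson is a separate, OS-level mitigation.

```python
from ultralytics import YOLO

model = YOLO("best.pt")

# Possible mitigations for the out-of-memory kill during engine build:
# - half=True builds an FP16 engine, which usually lowers builder memory,
# - workspace caps the TensorRT builder workspace (in GiB),
# - imgsz matches the 416x416 input shape shown in the export log.
model.export(
    format="engine",
    half=True,
    workspace=2,   # keep well below the free memory reported by the device
    imgsz=416,
    device=0,
)

trt_model = YOLO("best.engine", task="detect")
```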
open
2025-03-10T10:10:19Z
2025-03-11T04:02:10Z
https://github.com/ultralytics/ultralytics/issues/19614
[ "bug", "embedded", "exports" ]
feiniua
4
thunlp/OpenPrompt
nlp
25
Test performance mismatch between training and testing phases
I used the default setting in the `experiments` directory for classification and found that the performance during training is **inconsistent** with that during testing. Here are the commands for reproducing the issue. - training: `python cli.py --config_yaml classification_softprompt.yaml` - test: `python cli.py --config_yaml classification_softprompt.yaml --test --resume` I append the following lines to the yaml file to load the trained model: >logging: &nbsp;&nbsp;path: logs/agnews_bert-base-cased_soft_manual_template_manual_verbalizer_211023110855 The training log shows the performance on the test set is: > trainer.evaluate Test Performance: micro-f1: 0.7927631578947368 However, the testing log says: > trainer.evaluate Test Performance: micro-f1: 0.8102631578947368
closed
2021-10-23T03:26:28Z
2021-11-06T15:36:01Z
https://github.com/thunlp/OpenPrompt/issues/25
[]
huchinlp
1
identixone/fastapi_contrib
pydantic
166
Allow overriding settings from fastAPI app
* FastAPI Contrib version: 0.2.9 * FastAPI version: 0.52.0 * Python version: 3.8.2 * Operating System: Linux ### Description Currently we can only override settings with settings from the fastAPI app by using environment variables (See the [Todo](https://github.com/identixone/fastapi_contrib/blob/081670603917b1b7e9646c75fba5614b09823a3e/fastapi_contrib/conf.py#L72)). Can we implement this functionaility, so we do not have to go through the environment variables?
closed
2021-02-11T08:37:00Z
2021-03-17T08:42:11Z
https://github.com/identixone/fastapi_contrib/issues/166
[]
pheanex
2
SALib/SALib
numpy
541
SALib.sample.sobol.sample raises ValueError with "skip_values" set to zero.
Using SALib 1.4.6 with Python 3.10.6. The code to reproduce the error is as follows: ``` from SALib.sample.sobol import sample from SALib.test_functions import Ishigami import numpy as np problem = { 'num_vars': 3, 'names': ['x1', 'x2', 'x3'], 'bounds': [[-3.14159265359, 3.14159265359], [-3.14159265359, 3.14159265359], [-3.14159265359, 3.14159265359]] } param_values = sample(problem, 1024, skip_values=0) ``` Error message is as follows: ``` ValueError Traceback (most recent call last) Cell In [22], line 13 4 import numpy as np 6 problem = { 7 'num_vars': 3, 8 'names': ['x1', 'x2', 'x3'], (...) 11 [-3.14159265359, 3.14159265359]] 12 } ---> 13 param_values = sample(problem, 1024, skip_values=0) File ~/opt/anaconda3/envs/sensitivity/lib/python3.10/site-packages/SALib/sample/sobol.py:129, in sample(problem, N, calc_second_order, scramble, skip_values, seed) 127 qrng.fast_forward(M) 128 elif skip_values < 0 or isinstance(skip_values, int): --> 129 raise ValueError("`skip_values` must be a positive integer.") 131 # sample Sobol' sequence 132 base_sequence = qrng.random(N) ValueError: `skip_values` must be a positive integer. ```
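A sketch of what the validation in `SALib/sample/sobol.py` presumably intends, judging only from the branch quoted in the traceback: reject negative or non-integer values instead of rejecting every integer, which is what currently makes `skip_values=0` fail.

```python
def check_skip_values(skip_values):
    """Validate skip_values as the error message suggests it should be.

    The traceback above shows the ValueError firing for skip_values=0 because
    the condition is inverted; a non-negative integer (including 0) should pass.
    """
    if not isinstance(skip_values, int) or skip_values < 0:
        raise ValueError("`skip_values` must be a non-negative integer.")
    return skip_values


check_skip_values(0)      # ok
check_skip_values(1024)   # ok
# check_skip_values(-1)   # would raise ValueError
```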
closed
2022-10-17T16:57:57Z
2022-10-17T22:57:56Z
https://github.com/SALib/SALib/issues/541
[]
ddebnath-nrel
1
httpie/cli
python
1,043
Consider joining efforts with xh in porting HTTPie to Rust
HTTPie is one of a few tools that I recommend to everyone. However, the language it is currently written in means it can suffer from slow startup and some occasional installation problems. I and other lovely contributors have been working on [porting HTTPie to Rust](https://github.com/ducaale/xh). And given that an [official Rust version of HTTPie](https://crates.io/crates/httpie) is now being planned, we would love to help in any way we can.
closed
2021-02-28T21:04:18Z
2021-04-02T15:18:36Z
https://github.com/httpie/cli/issues/1043
[]
ducaale
2
pyro-ppl/numpyro
numpy
1,694
bug in NeuTraReparam
Minimal example: ```python import jax import jax.numpy as jnp from jax.random import PRNGKey import numpyro import numpyro.distributions as dist from numpyro.infer import MCMC, NUTS, Trace_ELBO, SVI from numpyro.infer.reparam import NeuTraReparam from numpyro.infer.autoguide import AutoBNAFNormal n = 100 p = 10 # n_dim x q = 5 # n_dim y k = min(3, p, q) # n_dim latent X = dist.MultivariateNormal(jnp.zeros(p), jnp.eye(p, p)).sample(PRNGKey(0), (n,)) Y = dist.MultivariateNormal(jnp.zeros(q), jnp.eye(q, q)).sample(PRNGKey(1), (n,)) def model(X, Y=None): with numpyro.plate('_k', k): P_cov = numpyro.sample('P_cov', dist.InverseGamma(3, 1)) with numpyro.plate('_q', q): Q_cov = numpyro.sample('Q_cov', dist.InverseGamma(3, 1)) P_cov = P_cov * jnp.eye(k, k) Q_cov = Q_cov * jnp.eye(q, q) with numpyro.plate('p', p): P = numpyro.sample('P', dist.MultivariateNormal(jnp.zeros(k), P_cov)) with numpyro.plate('k', k): Q = numpyro.sample('Q', dist.MultivariateNormal(jnp.zeros(q), Q_cov)) with numpyro.plate('n', n): Z = X @ P # low rank representation of X Y_pred = Z @ Q # transform back into Y via Q return numpyro.sample('Y', dist.MultivariateNormal(Y_pred, jnp.eye(q, q)), obs=Y) # --- this works --- mcmc = MCMC(NUTS(model), num_warmup=50, num_samples=50) mcmc.run(jax.random.PRNGKey(2), X, Y) # --- this fails --- guide = AutoBNAFNormal(model, num_flows=1, hidden_factors=[8, 8]) svi = SVI(model, guide, numpyro.optim.Adam(0.003), Trace_ELBO()) svi_result = svi.run(jax.random.PRNGKey(3), 5_000, X, Y) neutra = NeuTraReparam(guide, svi_result.params) mcmc = MCMC(NUTS(neutra.reparam(model)), num_warmup=1_000, num_samples=3_000) mcmc.run(jax.random.PRNGKey(4), X, Y) ``` I'm not entirely sure what's going on here. The following model works with vanilla NUTS, but returns ```TypeError: mul got incompatible shapes for broadcasting: (3, 5), (5, 5)``` when trying to run NUTS after reparameterizing with NeuTraReparam. If I remove the top two plates and replace the latents with the constants ```python P_cov = jnp.eye(k, k) Q_cov = jnp.eye(q, q) ``` the code runs but I get the following warnings: ``` <ipython-input-17-7e6df362d6ff>:54: UserWarning: Missing a plate statement for batch dimension -2 at site '_P_log_prob'. You can use `numpyro.util.format_shapes` utility to check shapes at all sites of your model. mcmc.run(jax.random.PRNGKey(4), X, Y) <ipython-input-17-7e6df362d6ff>:54: UserWarning: Missing a plate statement for batch dimension -2 at site '_Q_log_prob'. You can use `numpyro.util.format_shapes` utility to check shapes at all sites of your model. mcmc.run(jax.random.PRNGKey(4), X, Y) <ipython-input-17-7e6df362d6ff>:54: UserWarning: Missing a plate statement for batch dimension -2 at site 'Y'. You can use `numpyro.util.format_shapes` utility to check shapes at all sites of your model. mcmc.run(jax.random.PRNGKey(4), X, Y) ``` Maybe it has something to do with having multiple plate names with the same dimension?
closed
2023-12-07T14:56:15Z
2024-07-02T10:52:47Z
https://github.com/pyro-ppl/numpyro/issues/1694
[ "bug" ]
amifalk
1
globaleaks/globaleaks-whistleblowing-software
sqlalchemy
3,549
"Skip to content" link appears left top
### What version of GlobaLeaks are you using? 4.12.2 ### What browser(s) are you seeing the problem on? Chrome ### What operating system(s) are you seeing the problem on? Windows ### Describe the issue The "Skip to content" link appears very briefly at the top left when moving from the landing page to the questionnaire. Not sure if this is a bug, but it is a bit annoying and catches the user's eye! ### Proposed solution _No response_
closed
2023-07-23T17:14:10Z
2023-07-28T05:23:00Z
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3549
[ "T: Bug" ]
elbill
3
graphql-python/graphql-core
graphql
41
Support "out_name" in input types
GraphQL-Core supports an `out_name` for input types which is used by Graphene for passing parameters with transformed names to Python (because Python likes snake_case instead of camelCase). GraphQL-Core-Next should support a similar functionality that can be used by Graphene.
closed
2019-06-30T01:10:34Z
2019-09-04T22:03:36Z
https://github.com/graphql-python/graphql-core/issues/41
[ "feature" ]
Cito
4
aio-libs/aiomysql
sqlalchemy
707
Add type hints
see also https://github.com/aio-libs/aiopg/pull/813 for context managers
open
2022-01-30T02:01:20Z
2023-05-06T21:03:33Z
https://github.com/aio-libs/aiomysql/issues/707
[ "enhancement", "docs" ]
Nothing4You
4
seleniumbase/SeleniumBase
pytest
2,747
Improvements to highlighting elements
## Improvements to highlighting elements Allow `self.highlight(selector)` to also accept a `WebElement` for the selector. Add `self.highlight_elements(selector)` to highlight all elements that match the given selector. Summary: ```python self.highlight(selector, by="css selector", loops=4, scroll=True, timeout=None) # Accepts WebElement self.highlight_elements(selector, by="css selector", loops=4, scroll=True, limit=0) ```
closed
2024-05-03T17:41:44Z
2024-05-03T21:46:40Z
https://github.com/seleniumbase/SeleniumBase/issues/2747
[ "enhancement" ]
mdmintz
1
PaddlePaddle/PaddleHub
nlp
1,731
Help needed: problem reading a module's parameters in PaddleHub
![1](https://user-images.githubusercontent.com/73382519/145951503-d3bf62ac-9033-4b6f-b4d8-2f67f98c0be9.png) When I fine-tune resnet50_vd_dishes on my own dataset with PaddleHub 2.0, I get the error AttributeError: 'ResNet50vdDishes' object has no attribute 'parameters'. Searching online suggests this happens when the loaded module has not been upgraded for the new API. ![2](https://user-images.githubusercontent.com/73382519/145951523-0df32db1-8574-483b-9154-2a6a52162183.png) So I downgraded PaddleHub to 1.7 ![3](https://user-images.githubusercontent.com/73382519/145951585-779bfea1-8db8-48e8-98d4-9e799bea473d.png) but the same parameters problem still appears. When will the parameters of the resnet50_vd_dishes module be updated, and is there a way to read them? ![4](https://user-images.githubusercontent.com/73382519/145951639-72211cfc-0d30-45f6-bdef-0868bc3b0476.png)
open
2021-12-14T07:22:28Z
2021-12-14T08:50:25Z
https://github.com/PaddlePaddle/PaddleHub/issues/1731
[ "cv" ]
sk8boi
2
explosion/spaCy
deep-learning
12,123
TFR inconsistent and wrong break in doc.sents iterator.
The issue here is that TFR breaks at odd places and inconsistently. When the text was short, it worked. I had a long paragraph that I expected it to parse into sentences. I have to use markers in the text to identify split points between XML tags. To do so, I used `<pad>` as a non-whitespace delimiter so I can reverse the transformation. For some reason (attention?), it will disconnect part of the tags and become `pad>`. The SM version seems to be working. White space doesn't matter, but those tags and other visible characters do. Ideally, I would like to just have a list of indices of where each sentence starts. I cannot fix the input and I've already done the work to create a reversible XML text extraction process. ## How to reproduce the behaviour ``` import spacy ## fails sometimes # nlp = spacy.load("en_core_web_trf") ## works # nlp = spacy.load("en_core_web_sm") def get_sentences(text): doc = nlp(text) return [sent for sent in doc.sents if True] line = '-'*100 ## fails text = 'Word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word. The Arbitration Act 1996 will apply. [The person appointed will act as expert and not as arbitrator.] <pad> [the [exclusive OR non exclusive] jurisdiction of the courts of England and Wales.] <pad> ' ## works text = 'The Arbitration Act 1996 will apply. [The person appointed will act as expert and not as arbitrator.] <pad> [the [exclusive OR non exclusive] jurisdiction of the courts of England and Wales.] <pad> ' for out in get_sentences(text): print(f'{line}\n{out}\n') ``` Output 1: ``` ---------------------------------------------------------------------------------------------------- Word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word word. ---------------------------------------------------------------------------------------------------- The Arbitration Act 1996 will apply. ---------------------------------------------------------------------------------------------------- [The person appointed will act as expert and not as arbitrator.] 
---------------------------------------------------------------------------------------------------- < ---------------------------------------------------------------------------------------------------- pad> ---------------------------------------------------------------------------------------------------- [the [exclusive OR non exclusive] jurisdiction of the courts of England and Wales.] ---------------------------------------------------------------------------------------------------- < ---------------------------------------------------------------------------------------------------- pad> ---------------------------------------------------------------------------------------------------- ``` Output 2: ``` ---------------------------------------------------------------------------------------------------- The Arbitration Act 1996 will apply. ---------------------------------------------------------------------------------------------------- [The person appointed will act as expert and not as arbitrator.] <pad> ---------------------------------------------------------------------------------------------------- [the [exclusive OR non exclusive] jurisdiction of the courts of England and Wales.] <pad> ---------------------------------------------------------------------------------------------------- ``` ## Your Environment * Operating System: Mac M1 2022 * Python Version Used: Python 3.9.12 * spaCy Version Used: just updated today * Environment Information:
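A minimal sketch of one way to keep the `<pad>` markers intact, assuming the underlying problem is that the marker gets tokenized into `<` plus `pad>` and then split across sentences by the trf pipeline: register `<pad>` as a tokenizer special case so it always stays one token, and optionally forbid a sentence start on it with a small component. Whether the second step is also needed for the transformer parser would have to be tested.

```python
import spacy
from spacy.language import Language


@Language.component("keep_pad_inside_sentence")
def keep_pad_inside_sentence(doc):
    # Forbid a sentence boundary exactly at the <pad> marker.
    for token in doc:
        if token.text == "<pad>":
            token.is_sent_start = False
    return doc


nlp = spacy.load("en_core_web_trf")

# 1) Keep "<pad>" as a single token so it can no longer come out as "<" + "pad>".
nlp.tokenizer.add_special_case("<pad>", [{"ORTH": "<pad>"}])

# 2) Run the component before the parser assigns sentence boundaries.
nlp.add_pipe("keep_pad_inside_sentence", before="parser")

doc = nlp("The Arbitration Act 1996 will apply. "
          "[The person appointed will act as expert and not as arbitrator.] <pad> ")
print([sent.text for sent in doc.sents])
```

If only the start indices of sentences are needed, `[sent.start_char for sent in doc.sents]` gives them directly without any reversal of the `<pad>` transformation.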
closed
2023-01-18T22:10:42Z
2023-01-23T09:25:23Z
https://github.com/explosion/spaCy/issues/12123
[ "feat / parser", "feat / tokenizer" ]
ldmtwo
3
ClimbsRocks/auto_ml
scikit-learn
401
Logistic Regression does not work
Running the Boston housing example with: ml_predictor.train(df_boston_train,model_names=['LogisticRegression']) I get the following error messages: File "C:\Users\Anaconda3\lib\site-packages\auto_ml\predictor.py", line 670, in train self.trained_final_model = self.train_ml_estimator(self.model_names, self._scorer, X_df, y) File "C:\Users\Anaconda3\lib\site-packages\auto_ml\predictor.py", line 1236, in train_ml_estimator trained_final_model = self.fit_single_pipeline(X_df, y, estimator_names[0], feature_learning=feature_learning, prediction_interval=False) File "C:\Users\Anaconda3\lib\site-packages\auto_ml\predictor.py", line 875, in fit_single_pipeline ppl.fit(X_df, y) File "C:\Users\Anaconda3\lib\site-packages\auto_ml\utils_model_training.py", line 297, in fit self.model.fit(X_fit, y) File "C:\Users\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py", line 1217, in fit check_classification_targets(y) File "C:\Users\Anaconda3\lib\site-packages\sklearn\utils\multiclass.py", line 172, in check_classification_targets raise ValueError("Unknown label type: %r" % y_type) ValueError: Unknown label type: 'continuous'
open
2018-05-24T17:07:05Z
2018-08-03T07:31:48Z
https://github.com/ClimbsRocks/auto_ml/issues/401
[]
avdusen
1
Zeyi-Lin/HivisionIDPhotos
machine-learning
228
Crop lines
Could a crop-line parameter be added to the layout (photo sheet) API? For ID photos on a white background it would make cutting them apart much easier.
closed
2025-01-08T02:48:54Z
2025-01-09T03:45:58Z
https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/228
[]
panda-hub102
0
open-mmlab/mmdetection
pytorch
11,282
The number of boxes displayed is different from the value returned by bbox
Thanks for your error report and we appreciate it a lot. **Checklist** 1. I have searched related issues but cannot get the expected help. 2. I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help. 3. The bug has not been fixed in the latest version. **Describe the bug** draw_pred=True shows 50 boxes, but the bbox that returns the value has 50 more results (say 55), and I use OpenCV to display the images in the check box individually to show that there are indeed 55 **Reproduction** 1. What command or script did you run? inferencer = DetInferencer(model=r'work_dirs\full\yolox_l_8xb8-300e_coco\yolox_l_8xb8-300e_coco.py', weights=r'work_dirs\full\yolox_l_8xb8-300e_coco\epoch_300.pth', device='cuda', palette=(0,255,0)) ```none A placeholder for the command. ``` 2. Did you make any modifications on the code or config? Did you understand what you have modified? modify 4. What dataset did you use? **Environment** 1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment information and paste it here. 2. You may add addition that may be helpful for locating the problem, such as - How you installed PyTorch \[e.g., pip, conda, source\] - Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.) **Error traceback** If applicable, paste the error trackback here. ```none A placeholder for trackback. ``` **Bug fix** If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
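A sketch of one explanation worth ruling out, assuming the mismatch comes from the visualizer's score threshold: `DetInferencer` only draws predictions above `pred_score_thr` (0.3 by default), while the returned `predictions` list contains every box. Counting both sides with the same threshold should make the numbers agree. The image path is a placeholder; the config and weights paths are taken from the report.

```python
from mmdet.apis import DetInferencer

inferencer = DetInferencer(
    model=r"work_dirs\full\yolox_l_8xb8-300e_coco\yolox_l_8xb8-300e_coco.py",
    weights=r"work_dirs\full\yolox_l_8xb8-300e_coco\epoch_300.pth",
    device="cuda",
)

score_thr = 0.3  # the visualizer's default drawing threshold (assumption to verify)
result = inferencer("demo.jpg", pred_score_thr=score_thr)

pred = result["predictions"][0]
kept = [b for b, s in zip(pred["bboxes"], pred["scores"]) if s >= score_thr]

print(len(pred["bboxes"]), "boxes returned in total")
print(len(kept), "boxes above the drawing threshold")  # should match the drawn image
```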
closed
2023-12-14T03:47:21Z
2023-12-26T10:20:00Z
https://github.com/open-mmlab/mmdetection/issues/11282
[]
CyanMystery
0
drivendataorg/cookiecutter-data-science
data-science
25
Include nosetests out of the box with top level testing dir
One of the main components that differs from my usual data science set-up is a top-level directory for unit and integration testing. Once a model moves to production, it is vital that it ship with unit and integration tests and assurance that the work does not break any other models. I recommend adding this section at the top level of the module so that forked projects can run the testing suite with access to all the proper sub-modules. Great work; I appreciate the organization!
closed
2016-05-16T15:05:30Z
2019-04-15T11:44:39Z
https://github.com/drivendataorg/cookiecutter-data-science/issues/25
[]
denisekgosnell
4
coqui-ai/TTS
pytorch
2,605
[Bug] Not building on Arch linux due to python 3.11.3 version
### Describe the bug Install scripts are to old for 3.11.3 python version, breaking the make install ### To Reproduce install python 3.11.3 and run make install ### Expected behavior should install ### Logs ```shell pip install -e .[all] Defaulting to user installation because normal site-packages is not writeable Obtaining file:///... Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [9 lines of output] Collecting setuptools Using cached setuptools-67.7.2-py3-none-any.whl (1.1 MB) Collecting wheel Using cached wheel-0.40.0-py3-none-any.whl (64 kB) Collecting cython==0.29.28 Using cached Cython-0.29.28-py2.py3-none-any.whl (983 kB) ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11 ERROR: Could not find a version that satisfies the requirement numpy==1.21.6 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0rc1, 1.23.0rc2, 1.23.0rc3, 1.23.0, 1.23.1, 1.23.2, 1.23.3, 1.23.4, 1.23.5, 1.24.0rc1, 1.24.0rc2, 1.24.0, 1.24.1, 1.24.2, 1.24.3) ERROR: No matching distribution found for numpy==1.21.6 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. make: *** [Makefile:69: install] Error 1 ``` ### Environment ```shell python 3.11.3 ``` ### Additional context _No response_
closed
2023-05-09T13:30:56Z
2023-06-30T12:01:31Z
https://github.com/coqui-ai/TTS/issues/2605
[ "bug" ]
c-seeger
6
sqlalchemy/alembic
sqlalchemy
1,063
Alembic drops and creates foreign key constraints (autogenerate) when the table has different schema than the referenced columns
I am trying to use the autogenerate feature from alembic to update the schema of an MSSQL database. ```lang-python class Table1(Base): __tablename__ = "table1" __table_args__ = ( PrimaryKeyConstraint("ID", name="PK_Table1"), ) ID = Column(Integer, nullable=False) class Table2(Base): __tablename__ = "table2" __table_args__ = ( ForeignKeyConstraint(['Table1ID'], ['Table1.ID'], name='fk_Table2_Table1'), {'schema': 'foo'} ) Table1ID = Column(Integer, nullable=False) Table1_ = relationship('Table1', back_populates='Table2') ``` After executing the command `alembic revision --autogenerate`, this is the `upgrade()` function I get: ```lang-python def upgrade(): # ### commands auto generated by Alembic - please adjust! ### op.drop_constraint('fk_Table2_Table1', 'Table2', schema='foo', type_='foreignkey') op.create_foreign_key('fk_Table2_Table1', 'Table2', 'Table1', ['Table1ID'], ['ID'], source_schema='foo') # ### end Alembic commands ### ``` After digging into the code, I found that when alembic compares the foreign keys between the database and the model in the code, sqlalchemy forces a `dbo` schema onto the referenced Table1.ID column: Database foreign key: ('foo', 'Table2', ('Table1ID',), 'dbo', 'Table1', ('ID',), None, None, 'not deferrable') Model foreign key: ('foo', 'Table2', ('Table1ID',), None, 'Table1', ('ID',), None, None, 'not deferrable') This difference leads to the drop and create commands in the `upgrade()` function later on. If I remove the `{'schema': 'foo'}` in `__table_args__` the issue disappears, so my guess is that the table schema (different from the default one) forces the schema on the foreign key reference. I would like to know if there is any way to overcome this problem.
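A minimal sketch of one workaround that follows from the comparison shown above, assuming the only asymmetry is that the reflected foreign key carries the default `dbo` schema while the model side carries `None`: make the referenced table's schema explicit so both tuples render identically. The models are reduced to the essentials and a primary key is added to `Table2` only so the mapper is valid in this sketch.

```python
from sqlalchemy import Column, Integer, ForeignKeyConstraint, PrimaryKeyConstraint
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Table1(Base):
    __tablename__ = "table1"
    # Spell out the default schema so autogenerate compares 'dbo' against 'dbo'
    # instead of 'dbo' against None.
    __table_args__ = (
        PrimaryKeyConstraint("ID", name="PK_Table1"),
        {"schema": "dbo"},
    )
    ID = Column(Integer, nullable=False)


class Table2(Base):
    __tablename__ = "table2"
    __table_args__ = (
        # Reference the fully qualified column, matching the schema above.
        ForeignKeyConstraint(["Table1ID"], ["dbo.table1.ID"], name="fk_Table2_Table1"),
        {"schema": "foo"},
    )
    Table1ID = Column(Integer, primary_key=True, nullable=False)
```

An alternative, if changing the models is undesirable, would be an `include_object` hook in `env.py` that filters out these spurious foreign-key diffs, at the cost of autogenerate no longer tracking real FK changes.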
closed
2022-07-12T07:32:38Z
2022-07-12T18:34:51Z
https://github.com/sqlalchemy/alembic/issues/1063
[]
jrafols-imogate
0
pennersr/django-allauth
django
3,985
send real Email with docker in ubuntu server
I have a VPS (virtual private server) and run a Dockerized Django project on it. How can I send email from my Ubuntu server that runs the Django project in Docker? If an SMTP service is needed, how can I integrate it with the django-allauth package? Please help. Thanks so much
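django-allauth sends its confirmation and password-reset mail through Django's normal email framework, so the integration point is the standard `EMAIL_*` settings rather than anything allauth-specific. A sketch of `settings.py` for an external SMTP provider; the host, port and credentials are placeholders to fill in, and in a Docker setup they are best injected through environment variables.

```python
# settings.py — hypothetical SMTP configuration; all values are placeholders
import os

EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
EMAIL_HOST = os.environ.get("EMAIL_HOST", "smtp.example.com")
EMAIL_PORT = int(os.environ.get("EMAIL_PORT", 587))
EMAIL_USE_TLS = True
EMAIL_HOST_USER = os.environ.get("EMAIL_HOST_USER", "no-reply@example.com")
EMAIL_HOST_PASSWORD = os.environ.get("EMAIL_HOST_PASSWORD", "")
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER

# allauth then uses this backend automatically, e.g. for:
ACCOUNT_EMAIL_VERIFICATION = "mandatory"
```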
closed
2024-07-21T19:14:08Z
2024-07-21T19:27:56Z
https://github.com/pennersr/django-allauth/issues/3985
[]
sinalalebakhsh
0
quokkaproject/quokka
flask
604
fix some pelican unrendered footers
Some pelican themes cant render footers, fix it by overrride jinja loader https://github.com/rochacbruno/quokka_ng/issues/67
open
2018-02-07T01:44:42Z
2018-02-07T01:44:42Z
https://github.com/quokkaproject/quokka/issues/604
[ "1.0.0", "hacktoberfest" ]
rochacbruno
0
KaiyangZhou/deep-person-reid
computer-vision
63
[bug] 'best_rank1' and 'best_epoch' may be wrong when resuming from a saved model
After user resumed from a saved model, the value of `best_rank1` and `best_epoch` may be wrong (The wrong values will be recorded in the log). And in some cases, it may cause previously saved best model (`best_model.pth.tar`) to be falsely overwritten during the subsequent training process. ### Fix https://github.com/KaiyangZhou/deep-person-reid/pull/62 ### Warning this fix is **incompatible** with saved models created by old versions.
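A minimal sketch of the idea behind the referenced fix (#62), as far as it can be inferred from this description: persist `best_rank1` and `best_epoch` inside the checkpoint dictionary and restore them on resume, so a resumed run does not silently reset the best score and overwrite `best_model.pth.tar`. Function names are illustrative.

```python
import torch


def save_checkpoint(state, path):
    # state carries the counters alongside the weights, e.g.
    # {"state_dict": ..., "epoch": ..., "best_rank1": ..., "best_epoch": ...}
    torch.save(state, path)


def resume(model, path):
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["state_dict"])
    start_epoch = ckpt["epoch"]
    # Fall back to defaults only for old checkpoints that never stored the counters.
    best_rank1 = ckpt.get("best_rank1", -1.0)
    best_epoch = ckpt.get("best_epoch", 0)
    return start_epoch, best_rank1, best_epoch
```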
closed
2018-09-29T13:54:31Z
2018-09-29T21:52:24Z
https://github.com/KaiyangZhou/deep-person-reid/issues/63
[]
ghost
1
SciTools/cartopy
matplotlib
1,946
Position shift in plotting data defined by SouthPolarStereo coordinate system
### Description <!-- Please provide a general introduction to the issue/proposal. --> When plotting data defined by EPSG:3031 (WGS 84 / Antarctic Polar Stereographic), the positions will be shifted. They are output at the x-y coordinate about 0.9972 (= 1 / 1.028) times smaller than appropriate values. It seems that the all points are approaching the south pole with the same scale factor. <!-- If you are reporting a bug, attach the *entire* traceback from Python. If you are proposing an enhancement/new feature, provide links to related articles, reference examples, etc. If you are asking a question, please ask on StackOverflow and use the cartopy tag. All cartopy questions on StackOverflow can be found at https://stackoverflow.com/questions/tagged/cartopy --> #### Code to reproduce ```python import cartopy.crs as ccrs import matplotlib.pyplot as plt import pyproj as proj4 from pyproj import Transformer epsg4326 = proj4.CRS.from_epsg(4326) # Lat Lon WGS84 epsg3031 = proj4.CRS.from_epsg(3031) # SouthPolarStereo WGS84 transformer = Transformer.from_crs(epsg4326, epsg3031) point_latlon = [-70, 40] point_xy = transformer.transform(point_latlon[0], point_latlon[1]) fig = plt.figure() ax1 = fig.add_subplot(121, projection=ccrs.SouthPolarStereo()) ax1.set_extent([35,45,-71, -69], ccrs.PlateCarree()) ax1.scatter(point_latlon[1], point_latlon[0], transform = ccrs.PlateCarree(), label = "LatLon") ax1.scatter(point_xy[0], point_xy[1], transform = ccrs.SouthPolarStereo(), label = "XY") ax1.coastlines() ax1.gridlines(draw_labels = True) ax1.legend() ax2 = fig.add_subplot(122, projection=ccrs.SouthPolarStereo()) ax2.set_extent([35,45,-71, -69], ccrs.PlateCarree()) ax2.scatter(point_latlon[1], point_latlon[0], transform = ccrs.PlateCarree(), label = "LatLon") ax2.scatter(1.028 * point_xy[0], 1.028 * point_xy[1], transform = ccrs.SouthPolarStereo(), label = "XY") ax2.coastlines() ax2.gridlines(draw_labels = True) ax2.legend() ``` ![test1](https://user-images.githubusercontent.com/41944544/143227689-f7a8e4ce-0784-4492-8a62-8d8341c50bdd.png) #### Traceback ``` ``` <details> <summary>Full environment definition</summary> <!-- fill in the following information as appropriate --> ### Operating system Ubuntu 20.04 ### Cartopy version 0.20.1 ### conda list ``` ``` ### pip list ``` ``` </details>
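A sketch of the likely reconciliation, assuming the ~1.028 factor comes from EPSG:3031 being defined with a standard parallel of 71°S while `ccrs.SouthPolarStereo()` defaults to true scale at the pole: pass `true_scale_latitude=-71` (or build the axes projection directly from the EPSG code) so the plotting CRS matches the CRS the x/y data were produced in.

```python
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from pyproj import Transformer

point_latlon = [-70, 40]
x, y = Transformer.from_crs("EPSG:4326", "EPSG:3031").transform(*point_latlon)

# Option 1: make the Cartopy projection match EPSG:3031's standard parallel.
proj_3031 = ccrs.SouthPolarStereo(true_scale_latitude=-71)
# Option 2 (same intent): proj_3031 = ccrs.epsg(3031)

ax = plt.axes(projection=proj_3031)
ax.set_extent([35, 45, -71, -69], ccrs.PlateCarree())
ax.scatter(point_latlon[1], point_latlon[0], transform=ccrs.PlateCarree(), label="LatLon")
ax.scatter(x, y, transform=proj_3031, label="XY")  # should now coincide
ax.coastlines()
ax.legend()
```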
closed
2021-11-24T11:20:25Z
2024-04-14T03:12:14Z
https://github.com/SciTools/cartopy/issues/1946
[]
HattoriAkihisa
2
keras-team/autokeras
tensorflow
1,316
Task specific tuner for structured data classifier
### Feature Description ### Code Example <!--- Please provide a code example for using that feature given the proposed feature is implemented. --> ```python ``` ### Reason <!--- Why do we need the feature? --> ### Solution <!--- Please tell us how to implement the feature, if you have one in mind. -->
closed
2020-08-27T02:54:55Z
2020-08-28T19:44:12Z
https://github.com/keras-team/autokeras/issues/1316
[ "feature request", "pinned" ]
haifeng-jin
0
litl/backoff
asyncio
126
Changelog for 1.11.0 is missing
Hey, I just saw that you released 1.11.0, but CHANGELOG.md doesn't contain an entry for it.
closed
2021-07-13T05:34:32Z
2021-07-14T05:39:23Z
https://github.com/litl/backoff/issues/126
[]
kasium
1
pallets/flask
python
5,392
Invalid `SERVER_NAME` + `url_for` leads to `AttributeError`
In our environment we have some security scanners running which generate artificial HTTP requests. Since they are closed source and I can't generate these calls with other tools, I created the below example which starts at the flask level and assumes that an invalid server passed thru the levels. ```py from flask import Flask, url_for app = Flask(__name__) @app.errorhandler(400) def do_400(e): url_for("foo") return "error", 400 res = app( { "ACTUAL_SERVER_PROTOCOL": "HTTP/1.1", "PATH_INFO": "/", "QUERY_STRING": "", "REQUEST_METHOD": "GET", "REQUEST_URI": "/", "SCRIPT_NAME": "", "SERVER_NAME": "foobar/..", "SERVER_PROTOCOL": "HTTP/1.1", "wsgi.url_scheme": "https", }, lambda x, y: None, ) ``` Error ``` Traceback (most recent call last): File "work/venv/lib/python3.12/site-packages/werkzeug/routing/map.py", line 258, in bind server_name = server_name.encode("idna").decode("ascii") ^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".pyenv/versions/3.12.0/lib/python3.12/encodings/idna.py", line 173, in encode raise UnicodeError("label empty or too long") UnicodeError: label empty or too long encoding with 'idna' codec failed The above exception was the direct cause of the following exception: Traceback (most recent call last): File "work/venv/lib/python3.12/site-packages/flask/app.py", line 1484, in full_dispatch_request rv = self.dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^ File "work/venv/lib/python3.12/site-packages/flask/app.py", line 1458, in dispatch_request self.raise_routing_exception(req) File "work/venv/lib/python3.12/site-packages/flask/app.py", line 1440, in raise_routing_exception raise request.routing_exception # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "work/venv/lib/python3.12/site-packages/flask/ctx.py", line 316, in __init__ self.url_adapter = app.create_url_adapter(self.request) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "work/venv/lib/python3.12/site-packages/flask/app.py", line 1883, in create_url_adapter return self.url_map.bind_to_environ( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "work/venv/lib/python3.12/site-packages/werkzeug/routing/map.py", line 371, in bind_to_environ return Map.bind( ^^^^^^^^^ File "work/venv/lib/python3.12/site-packages/werkzeug/routing/map.py", line 260, in bind raise BadHost() from e werkzeug.exceptions.BadHost: 400 Bad Request: The browser (or proxy) sent a request that this server could not understand. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "work/venv/lib/python3.12/site-packages/flask/app.py", line 2190, in wsgi_app response = self.full_dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "work/venv/lib/python3.12/site-packages/flask/app.py", line 1486, in full_dispatch_request rv = self.handle_user_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "work/venv/lib/python3.12/site-packages/flask/app.py", line 1341, in handle_user_exception return self.handle_http_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "work/venv/lib/python3.12/site-packages/flask/app.py", line 1281, in handle_http_exception return self.ensure_sync(handler)(e) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "work/foo.py", line 9, in do_400 url_for("foo") File "work/venv/lib/python3.12/site-packages/flask/helpers.py", line 225, in url_for return current_app.url_for( ^^^^^^^^^^^^^^^^^^^^ File "work/venv/lib/python3.12/site-packages/flask/app.py", line 1686, in url_for rv = url_adapter.build( # type: ignore[union-attr] ^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'build' ``` As you can see, the server_name passed to flask is invalid. This should normally not happen, but let's just assume that it might happen. In my case I use cheroot and it let's it thru. Then the server name should be encoded with IDNA which fails and that is fine. However, then the custom error handler for 400 is called and in my case it contains a `url_for` call. This call then fails with an `AttributeError` because the request was never fully consumed by flask. Even if this seems now a little bit artificial, I would like to ask, if flask could improve its handling here, I have the following ideas 1. Don't call custom error handlers when such an error occurs 2. Have a nicer exception than `AttributeError` so that the caller of `url_for` can handle it gracefully 3. Handle a `None` `url_adapter` in `url_for` Environment: - Python version: 3.12.0 - Flask version: 2.3.3 - Werkzeug: 2.3.8 _This error also happens with flask/werkzeug 3.0_
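A minimal sketch of what ideas 2 and 3 above could look like from the application side today, as a defensive workaround rather than the Flask-level fix being requested: treat a failed `url_for` inside the error handler as non-fatal, so a hostile `Host`/`SERVER_NAME` cannot turn a 400 into a 500.

```python
from flask import Flask, url_for

app = Flask(__name__)


@app.errorhandler(400)
def do_400(e):
    # When the incoming host fails IDNA validation, the request context has no
    # URL adapter, so url_for() currently dies with AttributeError.
    try:
        link = url_for("index")
    except Exception:  # hedged catch-all until Flask raises something nicer
        link = "/"
    return f"bad request, go back to {link}", 400


@app.route("/")
def index():
    return "ok"
```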
closed
2024-01-24T14:41:08Z
2024-04-15T01:16:39Z
https://github.com/pallets/flask/issues/5392
[]
kasium
5
plotly/dash-table
plotly
931
Filter datatable based on absolute value
It would be great to have a feature to filter the datatable based on the absolute value. Maybe something like this: `absolute{value}` A similar request can be found in the community: https://community.plotly.com/t/filter-based-on-absolute-value/38746 (a workaround would be to use two `if` statements)
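A sketch of that two-condition workaround using the existing `filter_query` syntax, assuming an OR of two comparisons is acceptable until an `absolute{...}` operator exists; the column name, threshold and data are made up for illustration.

```python
import pandas as pd
from dash import Dash, dash_table

df = pd.DataFrame({"value": [-7, -2, 0, 3, 9]})

app = Dash(__name__)
app.layout = dash_table.DataTable(
    columns=[{"name": "value", "id": "value"}],
    data=df.to_dict("records"),
    filter_action="native",
    # Emulate abs(value) > 5 with two conditions until an absolute-value
    # operator is available in the filtering syntax.
    filter_query="{value} > 5 || {value} < -5",
)

if __name__ == "__main__":
    app.run_server(debug=True)
```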
open
2021-07-30T14:07:53Z
2022-06-10T18:38:42Z
https://github.com/plotly/dash-table/issues/931
[]
RunQi-Han
1
pallets-eco/flask-sqlalchemy
sqlalchemy
1,186
Add `db.Model` typing workaround in docs?
mypy currently does not correctly handle inheritance from a class attribute with a model - `db.Model`: ``` error: Name "db.Model" is not defined [name-defined] ``` According on the [related bugreport](https://github.com/python/mypy/issues/8603), this issue will not be resolved anytime soon. So, there is working workaround: ```python import typing as t from flask_sqlalchemy import SQLAlchemy from flask_sqlalchemy.model import Model as BaseModel db = SQLAlchemy() if t.TYPE_CHECKING: class Model(BaseModel): pass else: Model = db.Model class MyModel(Model): pass ``` It will be useful if this workaround will be mentioned in documentation. I can make changes to the documentation myself if I get the approval.
closed
2023-04-03T11:31:42Z
2023-04-03T15:29:00Z
https://github.com/pallets-eco/flask-sqlalchemy/issues/1186
[]
TitaniumHocker
2
ahmedfgad/GeneticAlgorithmPython
numpy
318
selection of features and hyperparameters of the model
How to use feature selection together with hyperparameter selection of a classifier model? ``` gene_space = [ {'low': 1, 'high': 200, 'step': 3}, # n_estimators {'low': 1, 'high': 7, 'step': 1}, # depth # Feature selection (binary vector for each feature) *([0, 1] * (n_features // 2)) #*[random.randint(0, 1) for _ in range(n_features)] ] ```
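One possible layout, sketched under the assumption that the chromosome is simply the model's hyperparameters followed by one 0/1 gene per feature, and that the fitness function decodes the vector before fitting the classifier. The dataset, classifier and scoring below are illustrative, and the fitness signature follows PyGAD >= 3.

```python
import numpy as np
import pygad
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

# First two genes: hyperparameters; remaining genes: one 0/1 switch per feature.
gene_space = (
    [{"low": 1, "high": 200, "step": 3},   # n_estimators
     {"low": 1, "high": 7, "step": 1}]     # max_depth
    + [[0, 1]] * n_features
)


def fitness_func(ga_instance, solution, solution_idx):  # PyGAD >= 3 signature
    n_estimators, max_depth = int(solution[0]), int(solution[1])
    mask = np.array(solution[2:], dtype=bool)
    if not mask.any():                      # avoid an empty feature subset
        return 0.0
    clf = RandomForestClassifier(n_estimators=n_estimators,
                                 max_depth=max_depth, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()


ga = pygad.GA(num_generations=10, num_parents_mating=4,
              fitness_func=fitness_func, sol_per_pop=8,
              num_genes=len(gene_space), gene_space=gene_space, gene_type=int)
ga.run()
print(ga.best_solution())
```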
open
2025-03-10T14:01:41Z
2025-03-12T09:10:18Z
https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/318
[]
quant12345
0
tatsu-lab/stanford_alpaca
deep-learning
53
Confusion about input ids
Hi, thanks for sharing such great work. I've read your fine-tuning code and I'm a little confused about the model's inputs. From the code, the input to the model should look like this example: ### # Instruction: {instruction}### Input{input}### Response:{response}. So input_ids = tokenizer(example), label_ids = tokenizer(example), and label_ids[:source_len] = IGNORE_INDEX. I would like to ask: why do the input ids contain the response token ids? Doesn't the target then leak into the input? I am looking forward to your reply. Thank you very much.
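A sketch of why the response tokens have to appear in `input_ids` in this kind of causal-LM fine-tuning (the standard teacher-forcing setup the description matches): the loss is only computed where labels are not IGNORE_INDEX, and logits at position i are compared against the token at position i+1, so no target token is ever used to predict itself. The ids and vocab size below are toy values.

```python
import torch

IGNORE_INDEX = -100

# toy ids: [prompt ....][response ...]
source_len = 5
input_ids = torch.tensor([[11, 12, 13, 14, 15, 201, 202, 203]])
labels = input_ids.clone()
labels[:, :source_len] = IGNORE_INDEX   # prompt tokens contribute no loss

# Inside a causal LM, logits for position i predict token i+1:
#   shift_logits = logits[:, :-1, :], shift_labels = labels[:, 1:]
# Cross-entropy with ignore_index=-100 therefore trains only on the response,
# while the full prompt+response sequence is still needed as teacher-forced input.
vocab = 300
fake_logits = torch.randn(1, input_ids.shape[1], vocab)
loss_fn = torch.nn.CrossEntropyLoss(ignore_index=IGNORE_INDEX)
loss = loss_fn(fake_logits[:, :-1, :].reshape(-1, vocab), labels[:, 1:].reshape(-1))
print(loss)
```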
closed
2023-03-16T10:31:49Z
2023-03-16T16:22:55Z
https://github.com/tatsu-lab/stanford_alpaca/issues/53
[]
fuxuliu
1
marcomusy/vedo
numpy
651
plotting a matlibplot contour figure inside vedo plotter
Hello marcomusy, thank you again for your time! Is it possible to plot a static 2D contour figure from matplotlib inside the vedo plotter, like this? ![image](https://user-images.githubusercontent.com/66644985/169707302-2a5cbd1d-ab2d-4ddf-bc3e-96068f5fe719.png)
closed
2022-05-22T17:14:28Z
2023-10-18T13:20:16Z
https://github.com/marcomusy/vedo/issues/651
[ "enhancement", "fixed" ]
Amin-Fakia
1
holoviz/panel
matplotlib
7,280
Links between reference notebooks broken when converted to docs
In a notebook like ChatMessage.ipynb you have a link to another notebook \[`ChatFeed`\]\(ChatFeed.ipynb\). ![image](https://github.com/user-attachments/assets/7fcfe9e5-2ff3-48cc-b6d9-9062f34377e2) This link works when using the notebook as a notebook. But when the notebooks are converted to documentation this no longer works. See https://holoviz-dev.github.io/panel/reference/chat/ChatMessage.html where the link becomes `#ChatFeed.ipynb` when it should have become something like `./ChatFeed.html`. I would suggest we add a script to handle this during the conversion from notebook to markdown.
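A minimal sketch of what such a post-processing script could look like, assuming the converted pages end up as Markdown in a build directory and that every intra-reference link just needs its `.ipynb` suffix mapped to `.html`; the directory name and glob are placeholders.

```python
import re
from pathlib import Path

BUILD_DIR = Path("builtdocs/reference")       # placeholder output directory
LINK = re.compile(r"\(([^)\s]+)\.ipynb\)")    # markdown links ending in .ipynb

for page in BUILD_DIR.rglob("*.md"):
    text = page.read_text(encoding="utf-8")
    fixed = LINK.sub(lambda m: f"({m.group(1)}.html)", text)
    if fixed != text:
        page.write_text(fixed, encoding="utf-8")
        print(f"rewrote notebook links in {page}")
```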
closed
2024-09-15T08:55:15Z
2024-09-16T09:55:05Z
https://github.com/holoviz/panel/issues/7280
[ "type: docs" ]
MarcSkovMadsen
1
hankcs/HanLP
nlp
1,329
Network error when importing pyhanlp
<!-- The notes and version number are required, otherwise the issue will not be answered. If you want a quick reply, please fill in the template carefully, thank you. --> ## Notes Please confirm the following: * I have carefully read the following documents and could not find an answer in them: - [Home page documentation](https://github.com/hankcs/HanLP) - [wiki](https://github.com/hankcs/HanLP/wiki) - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ) * I have also searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer. * I understand that the open-source community is a voluntary community built on shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me. * [ ] I type an x inside these brackets to confirm the items above. ## Version <!-- For a release build, state the jar file name without its extension; for the GitHub repository version, state whether it is the master or portable branch --> The current latest version is: The version I am using is: <!-- The items above are required; the rest is free-form --> ## My question <!-- Please describe the problem in detail; the more detail, the more likely it is to be solved --> ## Reproducing the problem <!-- How did you trigger the problem? For example, did you modify the code? Modify a dictionary or model? --> ### Steps 1. First…… 2. Then…… 3. Next…… ### Trigger code ``` public void testIssue1234() throws Exception { CustomDictionary.add("用户词语"); System.out.println(StandardTokenizer.segment("触发问题的句子")); } ``` ### Expected output <!-- What correct result do you expect? --> ``` expected output ``` ### Actual output <!-- What did HanLP actually output? What happened? Where is it wrong? --> ``` actual output ``` ## Other information <!-- Any information that may be useful, including screenshots, logs, configuration files, related issues, etc. -->
closed
2019-11-21T06:23:36Z
2020-01-01T10:48:14Z
https://github.com/hankcs/HanLP/issues/1329
[ "ignored" ]
Leputa
4
cookiecutter/cookiecutter-django
django
4,990
Discord invite expired
The Discord server invite in your readme file has expired. Have a good day!
closed
2024-04-12T18:00:11Z
2024-04-16T18:27:01Z
https://github.com/cookiecutter/cookiecutter-django/issues/4990
[ "bug" ]
jpdborgna
1
lepture/authlib
django
187
Unable to decode ID Token with unicode characters - Part 1
**Describe the bug** We use Okta to authenticate. We have employees in Beijing that include unicode characters in their profiles. For a few employees the exchange of authentication code for tokens fails due to errors parsing the returned ID Tokens. The first consequence that we experience is an 'Incorrect Padding' error which comes from the urlsafe_b64decode function while parsing the signature. We've gotten around that error by including a couple of '=' signs at the end of the signature. Attached is [parse_token.py.zip](https://github.com/lepture/authlib/files/4149452/parse_token.py.zip) which illustrates this issue and our work-around. Since the tokens themselves contain some personally identifiable information which I cannot post to the general list, please contact me separately and I can get you a few examples. **Error Stacks** ``` Traceback (most recent call last): File "/Users/tom.gardner/Projects/parse-error/parse_token.py", line 44, in <module> signature = urlsafe_b64decode(signature) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/base64.py", line 133, in urlsafe_b64decode return b64decode(s) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/base64.py", line 87, in b64decode return binascii.a2b_base64(s) binascii.Error: Incorrect padding ``` **To Reproduce** Run the attached python application with Python 2.7 **Expected behavior** The application will fail with an Incorrect padding error. **Environment:** - OS: MacOS - Python Version: 2.7 - Authlib Version: Latest version in Git **Additional context** You'll have to contact me to get an ID Token to use.
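A sketch of the padding work-around described above in its general form: base64url-encoded JWT segments may legitimately arrive without `=` padding, so the decoder has to restore it before calling `urlsafe_b64decode`. This mirrors what JOSE libraries normally do and is not Okta-specific; the token string below is a placeholder.

```python
from base64 import urlsafe_b64decode


def b64url_decode(segment):
    """Decode a base64url JWT segment, restoring the stripped '=' padding."""
    if isinstance(segment, str):
        segment = segment.encode("ascii")
    padding = -len(segment) % 4      # number of '=' bytes needed (0-3)
    return urlsafe_b64decode(segment + b"=" * padding)


header, payload, signature = "xxx.yyy.zzz".split(".")  # placeholder token
raw_signature = b64url_decode(signature)
```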
closed
2020-02-03T18:34:56Z
2021-05-25T04:02:21Z
https://github.com/lepture/authlib/issues/187
[ "bug" ]
TomAtHulu
8
miguelgrinberg/Flask-Migrate
flask
529
No such command 'db'
(venv) M:\PROJETS\Python\Projects\Rendermap_Server>flask db init Usage: flask [OPTIONS] COMMAND [ARGS]... Try 'flask --help' for help. Error: No such command 'db'. Hello i'm new to this i've install Flask-Migrate and Flask but i get this error i've tried everything i dont know what to do ? this is my pip list : (venv) M:\PROJETS\Python\Projects\Rendermap_Server>pip list Package Version ----------------- ------- alembic 1.12.0 blinker 1.6.3 click 8.1.7 colorama 0.4.6 Flask 3.0.0 Flask-Migrate 4.0.5 Flask-SQLAlchemy 3.1.1 greenlet 3.0.0 itsdangerous 2.1.2 Jinja2 3.1.2 Mako 1.2.4 MarkupSafe 2.1.3 pip 23.2.1 psycopg2-binary 2.9.9 setuptools 68.2.0 SQLAlchemy 2.0.22 typing_extensions 4.8.0 Werkzeug 3.0.0 wheel 0.41.2 migration.py from flask import Flask from flask_sqlalchemy import SQLAlchemy from flask_migrate import Migrate app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://myuser:password@localhost:5432/Rendermap_DB' db = SQLAlchemy(app) migrate = Migrate(app, db) app.py from flask import Flask, jsonify, request from models import db, User, Layouts import config app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = config.DATABASE_URI db.init_app(app) and i've set $env:FLASK_APP = "app.py" and if I downgrate Flask : (venv) M:\PROJETS\Python\Projects\Rendermap_Server>flask db init Traceback (most recent call last): File "M:\PROJETS\Python\pyver\3.10.5\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "M:\PROJETS\Python\pyver\3.10.5\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\Scripts\flask.exe\__main__.py", line 4, in <module> File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\flask\__init__.py", line 7, in <module> from .app import Flask as Flask File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\flask\app.py", line 28, in <module> from . 
import cli File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\flask\cli.py", line 18, in <module> from .helpers import get_debug_flag File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\flask\helpers.py", line 16, in <module> from werkzeug.urls import url_quote ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\werkzeug\urls.py) and by downgrading Flask-Migrate (venv) M:\PROJETS\Python\Projects\Rendermap_Server>flask db init Traceback (most recent call last): File "M:\PROJETS\Python\pyver\3.10.5\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "M:\PROJETS\Python\pyver\3.10.5\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\Scripts\flask.exe\__main__.py", line 7, in <module> File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\flask\cli.py", line 1064, in main cli.main() File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\click\core.py", line 1078, in main rv = self.invoke(ctx) File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\click\core.py", line 1688, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\click\core.py", line 1688, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\click\core.py", line 1434, in invoke return ctx.invoke(self.callback, **ctx.params) File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\click\core.py", line 783, in invoke return __callback(*args, **kwargs) File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\click\decorators.py", line 33, in new_func return f(get_current_context(), *args, **kwargs) File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\flask\cli.py", line 358, in decorator return __ctx.invoke(f, *args, **kwargs) File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\click\core.py", line 783, in invoke return __callback(*args, **kwargs) File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\flask_migrate\cli.py", line 45, in init _init(directory, multidb, template, package) File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\flask_migrate\__init__.py", line 98, in wrapped f(*args, **kwargs) File "M:\PROJETS\Python\Projects\Rendermap_Server\venv\lib\site-packages\flask_migrate\__init__.py", line 122, in init directory = current_app.extensions['migrate'].directory KeyError: 'migrate'
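A minimal sketch of what the command is most likely missing, assuming the setup described above: `FLASK_APP` points at `app.py`, but `Migrate(app, db)` is only created in `migration.py`, so the app object the CLI loads never registers the `db` command group. Initialising Flask-Migrate on the same app the CLI imports should make `flask db init` appear without downgrading anything; the other tracebacks above come from mixing old Flask with new Werkzeug, and from `init` running against an app that has no `migrate` extension.

```python
# app.py — the module FLASK_APP points to
from flask import Flask
from flask_migrate import Migrate

import config
from models import db

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = config.DATABASE_URI

db.init_app(app)
migrate = Migrate(app, db)   # registers the `flask db ...` commands on this app
```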
closed
2023-10-17T19:46:07Z
2023-10-17T22:11:06Z
https://github.com/miguelgrinberg/Flask-Migrate/issues/529
[]
max-anders
0
Netflix/metaflow
data-science
1,673
Cardview on WSL error
The demo code is copied from the docs: from metaflow import FlowSpec, step, card class HelloFlow(FlowSpec): @card @step def start(self): print("HelloFlow is starting") self.next(self.hello) @step def hello(self): print("Say hi") self.next(self.end) @step def end(self): print("HelloFlow is ending") if __name__ == "__main__": HelloFlow() In Windows WSL, running the Metaflow Python code works, but the card view can't be started with `python demo.py card view start`; the output (originally garbled Chinese-locale PowerShell text, rendered in English here) looks like this: Start : This command cannot be run due to the following error: The system cannot find the file specified. At line:1 char: 1 + Start "file:///home/jucheng/proj/metaflow_demo/metaflow_card_cache/He ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidOperation: (:) [Start-Process], InvalidOperationException + FullyQualifiedErrorId : InvalidOperationException,Microsoft.PowerShell.Commands.StartProcessCommand
closed
2024-01-10T06:03:12Z
2024-01-25T13:09:43Z
https://github.com/Netflix/metaflow/issues/1673
[]
wolvever
2
public-apis/public-apis
api
3,626
Nc
closed
2023-09-07T08:18:39Z
2023-09-08T03:11:38Z
https://github.com/public-apis/public-apis/issues/3626
[]
raxxelz
0
scikit-optimize/scikit-optimize
scikit-learn
764
Sensitivity to scale of target function
First, I apologise if this isn't the forum for what's probably a usage problem than a bug but I couldn't find anywhere else to ask a question like this. I'm experimenting with Bayesian optimisation to optimise model parameters using a simulation code that takes ~10-60 minutes per evaluation. Modulo a small amount of (deterministic) numerical noise, I expect the target function to roughly be a correlated χ², so I've been testing `scikit-optimize` on a toy problem before I invest effort in the full-blown thing. As a simple example, this works reasonably well: ```Python import numpy as np import matplotlib.pyplot as pl # most people use plt but I'm a monster import skopt from skopt import plots space = np.array([[-2.,2.]])*np.ones((5,1)) A = np.random.rand(5,5) A = A.dot(A.T) # guarantees positive definite A def f(x): return np.dot(x, A.dot(x)) res = skopt.gp_minimize(f, space, verbose=True, n_calls=50) ax = plots.plot_objective(res) ``` ![Screenshot from 2019-05-17 15-28-52](https://user-images.githubusercontent.com/20858744/57934899-827c2380-78b8-11e9-9c12-80fa33870309.png) If I rescale the covariance matrix, however, things don't go so well... ```Python A *= 100 res = skopt.gp_minimize(f, space, verbose=True, n_calls=50) ax = plots.plot_objective(res) ``` ![Screenshot from 2019-05-17 15-30-28](https://user-images.githubusercontent.com/20858744/57935006-bb1bfd00-78b8-11e9-8dfc-5b1c751dd283.png) Although I think I understand the principles of Gaussian Processes, I'm still a beginner. I expect that there are some prudent parameter choices that would resolve this but I haven't managed to find them in the documentation or any of the presentations/talks I've found about Bayesian optimisation.
open
2019-05-17T14:35:10Z
2020-01-09T16:58:24Z
https://github.com/scikit-optimize/scikit-optimize/issues/764
[]
warrickball
2
ets-labs/python-dependency-injector
asyncio
445
Using with Class Based Views FastAPI
Hi! Please, if possible, help me use DI with FastAPI class-based views. https://fastapi-utils.davidmontague.xyz/user-guide/class-based-views/ Thank you!
open
2021-04-15T05:28:21Z
2021-05-14T12:25:03Z
https://github.com/ets-labs/python-dependency-injector/issues/445
[]
ShvetsovYura
1
SciTools/cartopy
matplotlib
2,123
tile cache not updated for different tile style
### Description Cached tile data is not updated when a different style of the same tile source is used. It should be updated. This could be achieved if the style was included in the cache dir name. In the example below, this could achieved if data were store in `/tmp/cartopy_cache/Stamen_terrain` and `/tmp/cartopy_cache/Stamen_toner` instead of `/tmp/cartopy_cache/Stamen`. #### Code to reproduce ``` import matplotlib.pyplot as plt import cartopy.io.img_tiles as cimgt def plot(tile): fig = plt.figure() ax = fig.add_subplot(1, 1, 1, projection=tile.crs) ax.set_extent([-2, 2, 49, 50]) ax.add_image(tile, 10) stamen_terrain = cimgt.Stamen('terrain', cache="/tmp/cartopy_cache") stamen_toner = cimgt.Stamen('toner', cache="/tmp/cartopy_cache") google_tiles = cimgt.GoogleTiles(cache="/tmp/cartopy_cache") plot(stamen_terrain) plot(stamen_toner) plot(google_tiles) ``` cartopy version: '0.21.1'
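Until the style becomes part of the cache key, a sketch of the obvious user-side workaround implied by the proposal above: give each style its own cache directory so tiles from different styles can never collide.

```python
import cartopy.io.img_tiles as cimgt

# Work around the shared "Stamen" cache directory by separating caches per style.
stamen_terrain = cimgt.Stamen("terrain", cache="/tmp/cartopy_cache/terrain")
stamen_toner = cimgt.Stamen("toner", cache="/tmp/cartopy_cache/toner")
google_tiles = cimgt.GoogleTiles(cache="/tmp/cartopy_cache/google")
```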
open
2023-01-08T13:58:27Z
2023-01-18T22:16:09Z
https://github.com/SciTools/cartopy/issues/2123
[ "Type: Enhancement" ]
apatlpo
0
Lightning-AI/pytorch-lightning
data-science
19,995
Load from checkpoint doesn't load model for inference
### Bug description Hi, I trained the attention unet model and by ModelCheckpoint, the trained model was saved as a .chpt file. Now, in inference, I encounter an error when loading the model. The same code was working around one month ago, I can't understand what is the problem. (version 2.3.0) ### What version are you seeing the problem on? master ### How to reproduce the bug ```python data_module.setup() tb_logger = TensorBoardLogger(save_dir='lightning_logs', name=f'{DatasetConfig.MODEL_NAME}') logger = pl.loggers.CSVLogger(save_dir='logs/', name=f'{DatasetConfig.MODEL_NAME}') model_checkpoint = ModelCheckpoint( monitor="valid_iou", mode="max", filename="ckpt_{epoch:03d}-vloss_{valid_loss:.4f}_vf1_{valid_iou:.4f}", auto_insert_metric_name=False, ) trainer = pl.Trainer(accelerator="auto", devices="auto", strategy="auto", max_epochs=DatasetConfig.NUM_EPOCHS, # enable_model_summary=False, callbacks=[model_checkpoint, lr_rate_monitor], precision="16-mixed", # limit_val_batches=0.1, val_check_interval=len(train_loader), num_sanity_val_steps=0, logger=[logger, tb_logger] ) trainer.fit(model, data_module) ##inference CKPT_PATH = "logs/deeplabv3p-test/version_2/checkpoints/ckpt_023-vloss_0.2485_vf1_0.5624.ckpt" model = MedicalSegmentationModel.load_from_checkpoint(checkpoint_path=CKPT_PATH) model = model.eval() ``` ### Error messages and logs ``` # Error messages and logs here please ``` RuntimeError Traceback (most recent call last) [<ipython-input-51-1b2363e13f62>](https://localhost:8080/#) in <cell line: 1>() ----> 1 model = MedicalSegmentationModel.load_from_checkpoint(checkpoint_path=CKPT_PATH) 2 # model = model.to(device) 3 model = model.eval() 4 frames [/usr/local/lib/python3.10/dist-packages/lightning/pytorch/utilities/model_helpers.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 123 " Please call it on the class type and make sure the return value is used." 
124 ) --> 125 return self.method(cls, *args, **kwargs) 126 127 return wrapper [/usr/local/lib/python3.10/dist-packages/lightning/pytorch/core/module.py](https://localhost:8080/#) in load_from_checkpoint(cls, checkpoint_path, map_location, hparams_file, strict, **kwargs) 1584 1585 """ -> 1586 loaded = _load_from_checkpoint( 1587 cls, # type: ignore[arg-type] 1588 checkpoint_path, [/usr/local/lib/python3.10/dist-packages/lightning/pytorch/core/saving.py](https://localhost:8080/#) in _load_from_checkpoint(cls, checkpoint_path, map_location, hparams_file, strict, **kwargs) 89 return _load_state(cls, checkpoint, **kwargs) 90 if issubclass(cls, pl.LightningModule): ---> 91 model = _load_state(cls, checkpoint, strict=strict, **kwargs) 92 state_dict = checkpoint["state_dict"] 93 if not state_dict: [/usr/local/lib/python3.10/dist-packages/lightning/pytorch/core/saving.py](https://localhost:8080/#) in _load_state(cls, checkpoint, strict, **cls_kwargs_new) 185 186 # load the state_dict on the model automatically --> 187 keys = obj.load_state_dict(checkpoint["state_dict"], strict=strict) 188 189 if not strict: [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in load_state_dict(self, state_dict, strict, assign) 2187 2188 if len(error_msgs) > 0: -> 2189 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( 2190 self.__class__.__name__, "\n\t".join(error_msgs))) 2191 return _IncompatibleKeys(missing_keys, unexpected_keys) RuntimeError: Error(s) in loading state_dict for MedicalSegmentationModel: Missing key(s) in state_dict: "model.decoder.blocks.x_0_0.conv1.0.weight", "model.decoder.blocks.x_0_0.conv1.1.weight", "model.decoder.blocks.x_0_0.conv1.1.bias", "model.decoder.blocks.x_0_0.conv1.1.running_mean", "model.decoder.blocks.x_0_0.conv1.1.running_var", "model.decoder.blocks.x_0_0.conv2.0.weight", "model.decoder.blocks.x_0_0.conv2.1.weight", "model.decoder.blocks.x_0_0.conv2.1.bias", "model.decoder.blocks.x_0_0.conv2.1.running_mean", "model.decoder.blocks.x_0_0.conv2.1.running_var", "model.decoder.blocks.x_0_1.conv1.0.weight", "model.decoder.blocks.x_0_1.conv1.1.weight", "model.decoder.blocks.x_0_1.conv1.1.bias", "model.decoder.blocks.x_0_1.conv1.1.running_mean", "model.decoder.blocks.x_0_1.conv1.1.running_var", "model.decoder.blocks.x_0_1.conv2.0.weight", "model.decoder.blocks.x_0_1.conv2.1.weight", "model.decoder.blocks.x_0_1.conv2.1.bias", "model.decoder.blocks.x_0_1.conv2.1.running_mean", "model.decoder.blocks.x_0_1.conv2.1.running_var", "model.decoder.blocks.x_1_1.conv1.0.weight", "model.decoder.blocks.x_1_1.conv1.1.weight", "model.decoder.blocks.x_1_1.conv1.1.bias", "model.decoder.blocks.x_1_1.conv1.1.running_mean", "model.decoder.blocks.x_1_1.conv1.1.running_var", "model.decoder.blocks.x_1_1.conv2.0.weight", "model.decoder.blocks.x_1_1.conv2.1.weight", "model.decoder.blocks.x_1_1.conv2.1.bias", "model.decoder.blocks.x_1_1.conv2.1.running_mean", "model.decoder.blocks.x_1_1.conv2.1.running_var", "model.decoder.blocks.x_0_2.conv1.0.weight", "model.decoder.bl... 
Unexpected key(s) in state_dict: "model.decoder.aspp.0.convs.0.0.weight", "model.decoder.aspp.0.convs.0.1.weight", "model.decoder.aspp.0.convs.0.1.bias", "model.decoder.aspp.0.convs.0.1.running_mean", "model.decoder.aspp.0.convs.0.1.running_var", "model.decoder.aspp.0.convs.0.1.num_batches_tracked", "model.decoder.aspp.0.convs.1.0.0.weight", "model.decoder.aspp.0.convs.1.0.1.weight", "model.decoder.aspp.0.convs.1.1.weight", "model.decoder.aspp.0.convs.1.1.bias", "model.decoder.aspp.0.convs.1.1.running_mean", "model.decoder.aspp.0.convs.1.1.running_var", "model.decoder.aspp.0.convs.1.1.num_batches_tracked", "model.decoder.aspp.0.convs.2.0.0.weight", "model.decoder.aspp.0.convs.2.0.1.weight", "model.decoder.aspp.0.convs.2.1.weight", "model.decoder.aspp.0.convs.2.1.bias", "model.decoder.aspp.0.convs.2.1.running_mean", "model.decoder.aspp.0.convs.2.1.running_var", "model.decoder.aspp.0.convs.2.1.num_batches_tracked", "model.decoder.aspp.0.convs.3.0.0.weight", "model.decoder.aspp.0.convs.3.0.1.weight", "model.decoder.aspp.0.convs.3.1.weight", "model.decoder.aspp.0.convs.3.1.bias", "model.decoder.aspp.0.convs.3.1.running_mean", "model.decoder.aspp.0.convs.3.1.running_var", "model.decoder.aspp.0.convs.3.1.num_batches_tracked", "model.decoder.aspp.0.convs.4.1.weight", "model.decoder.aspp.0.convs.4.2.weight", "model.decoder.aspp.0.convs.4.2.bias", "model.decoder.aspp.0.convs.4.2.running_mean", "model.decoder.aspp.0.convs.4.2.running_var", "model.decoder.aspp.0.convs.4.2.num_batche... size mismatch for model.segmentation_head.0.weight: copying a param with shape torch.Size([3, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 16, 3, 3]). ### Environment <details> <summary>Current environment</summary> ``` #- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow): #- PyTorch Lightning Version (e.g., 1.5.0): #- Lightning App Version (e.g., 0.5.2): #- PyTorch Version (e.g., 2.0): #- Python version (e.g., 3.9): #- OS (e.g., Linux): #- CUDA/cuDNN version: #- GPU models and configuration: #- How you installed Lightning(`conda`, `pip`, source): #- Running environment of LightningApp (e.g. local, cloud): ``` </details> ### More info _No response_ cc @awaelchli
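The missing keys look like a UNet-style decoder while the unexpected keys are ASPP/DeepLabV3+ weights, so the freshly constructed model and the checkpoint appear to build different architectures. A hedged sketch (the constructor arguments of MedicalSegmentationModel are not shown in the report, so the names below are placeholders): `load_from_checkpoint` forwards extra keyword arguments to `__init__`, so re-creating the exact architecture used during training may resolve the mismatch.

```python
# Sketch only: "arch" is a placeholder for whatever constructor argument
# MedicalSegmentationModel actually takes; the point is that the instantiated
# architecture must match the one stored in the checkpoint.
CKPT_PATH = "logs/deeplabv3p-test/version_2/checkpoints/ckpt_023-vloss_0.2485_vf1_0.5624.ckpt"

model = MedicalSegmentationModel.load_from_checkpoint(
    CKPT_PATH,
    arch="deeplabv3plus",  # must match the architecture that produced this checkpoint
)
model.eval()
```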
closed
2024-06-19T13:41:24Z
2024-06-24T13:25:47Z
https://github.com/Lightning-AI/pytorch-lightning/issues/19995
[ "question", "checkpointing", "ver: 2.2.x" ]
golnaz-hs
1
serengil/deepface
machine-learning
491
Which dataset did you use for training the VGGFace model?
Hello, I was wondering if you could give me the name of the dataset you used for training the VGGFace model? Thank you so much.
closed
2022-06-01T15:22:08Z
2022-06-02T08:54:31Z
https://github.com/serengil/deepface/issues/491
[ "question" ]
sayeh1994
1
graphql-python/graphene-django
django
799
Conflicting argument and object type name
I have a model with a property called `info`. I also have a mutation for creating an entry. I added `info` as an argument to the mutation. This mutation argument conflicts with the `info` argument of the `mutate()` method. This is the response I got when executing the mutation operation: > resolve() got multiple values for argument 'info' I'm aware that the `kwargs` argument creates a duplicate `info` argument, thus causing the error. To demonstrate the culprit: ``` mutation { createExample(info: "JC Denton. 23 years old.") { example { ... } } } ``` ``` ... class CreateExampleMutation(graphene.Mutation): class Arguments: info = graphene.String() ... ``` ``` ... def mutate(self, info, **kwargs): pass # -> def mutate(self, info, info): pass ... ``` I can accept any solution that does not require properties to be explicitly set.
closed
2019-10-14T16:50:04Z
2019-10-23T21:47:51Z
https://github.com/graphql-python/graphene-django/issues/799
[]
bluday
1
Netflix/metaflow
data-science
1,489
AWS Batch Error: uncaught exception: address -> / what() -> / type ->
I am deploying a fairly memory-intensive flow on AWS Batch and running into the following uncaught exception and error. ``` uncaught exception: address -> 0x1079213c0 what() -> "map::at: key not found" type -> std::out_of_range AWS Batch error: CannotInspectContainerError: Could not transition to inspecting; timed out after waiting 30s This could be a transient error. Use @retry to retry. ``` The flow will retry on a new instance; however, the warning and error persist. I have addressed transient errors in the past; it is the uncaught exception snippet that I have never run into before and could not find a solution for. I have attempted to increase the resources allocated for this particular step, however that does not solve the issue. Is there a known root cause and solution? Thanks in advance
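Since the Batch message itself points at `@retry`, here is a minimal sketch of attaching it to the heavy step (the step name and resource sizes below are placeholders, not taken from this report):

```python
# Sketch only: @retry re-runs the step when transient Batch errors such as
# CannotInspectContainerError occur; memory/cpu values are placeholders.
from metaflow import FlowSpec, batch, retry, step


class HeavyFlow(FlowSpec):

    @retry(times=3)
    @batch(memory=64000, cpu=8)
    @step
    def start(self):
        ...  # memory-intensive work
        self.next(self.end)

    @step
    def end(self):
        pass


if __name__ == "__main__":
    HeavyFlow()
```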
closed
2023-07-18T21:22:31Z
2023-07-18T21:28:10Z
https://github.com/Netflix/metaflow/issues/1489
[]
elutins
1
pydantic/FastUI
fastapi
366
Add support for file links
Hi! We're looking to adopt FastUI for a better experience of using and testing our backend web services. It's going down well here so far, so thanks for the great work. As I see it, there are two kinds of link component FastUI supports: 1. links to routes defined within the application, with addresses relative to a root for the current router (at minimum, your host URL) 2. links that begin with "http" for external stuff, which is the only exception to (1). Could you extend the second of these to cover paths beginning with "file" as well, please? Thanks in advance for considering this.
closed
2024-11-13T21:16:16Z
2024-11-16T14:50:23Z
https://github.com/pydantic/FastUI/issues/366
[]
JamesRamsden-Naimuri
2
lukas-blecher/LaTeX-OCR
pytorch
3
Training?
Really nice work. Can you please provide a proper pipeline for how to train with our own data? I tried to use your formula images and config (from Google Drive) to train but got the error: RuntimeError: merge_sort: failed to synchronize: device-side assert triggered Can you help with training? Thanks
closed
2021-03-18T03:25:18Z
2021-03-19T08:54:35Z
https://github.com/lukas-blecher/LaTeX-OCR/issues/3
[]
PhenomenalOnee
2
iperov/DeepFaceLab
machine-learning
962
"GeForce RTX 3090 doesnt response, terminating it."
Extraction on the 30x build is either very slow or doesn't work at all. The slowness also occurs with merging, but the "doesn't respond" error is exclusive to extraction.
open
2020-12-04T21:08:03Z
2023-07-24T07:50:15Z
https://github.com/iperov/DeepFaceLab/issues/962
[]
GuusDeKroon
6
plotly/dash
plotly
2,442
[BUG] Case insensitive filter does not work in documentation example
When looking at the first data table under the heading "Back-end Filtering" on [the DataTable Filtering page](https://dash.plotly.com/datatable/filtering), viewed on Chrome browser **Describe the bug** Case-insensitive filter does not work on the country column. For example, I type "Albania" in the filter input under "country", press enter and see the row corresponding to Albania appear. Then, I replace "Albania" with "albania" and press enter, and the row disappears, leaving me with an empty table. I press the button next to the filter input to toggle the case sensitivity and repeat this, and the same result occurs. The search with "Albania" returns the correct row, but the search with "albania" returns no rows. I am expecting that filtering for "albania" should return one row in one of these instances.
closed
2023-03-03T23:50:46Z
2024-07-25T13:04:10Z
https://github.com/plotly/dash/issues/2442
[]
jseyhun
4
praw-dev/praw
api
1,948
Unable to get flairs in a subreddit
### Describe the Bug Hi everyone, I'm trying to run this code but it returns the error ``` for flair in reddit.subreddit("minefortest").flair(limit=None): print(flair) ``` Getting - prawcore.exceptions.Forbidden: received 403 HTTP response (not an auth error) ### Desired Result <List of flairs in a subreddit> ### Code to reproduce the bug ```Python for flair in reddit.subreddit("minefortest").flair(limit=None): print(flair) ``` ### The `Reddit()` initialization in my code example does not include the following parameters to prevent credential leakage: `client_secret`, `password`, or `refresh_token`. - [X] Yes ### Relevant Logs ```Shell None ``` ### This code has previously worked as intended. Yes ### Operating System/Environment macOS Monterey 12.5.1 ### Python Version 3.7.16 ### PRAW Version 7.7.0 ### Prawcore Version 2.3.0 ### Anything else? _No response_
closed
2023-03-17T04:35:52Z
2024-08-25T09:01:10Z
https://github.com/praw-dev/praw/issues/1948
[]
pnpninja
3
python-restx/flask-restx
api
248
The full example needs a few notes on usage and RestX needs to fail on JSON parsing
EDIT: This was originally entitled "Full example does not work, and I can't figure out why." Digging in, I discovered the reason, as seen in the two following comments. ### **Code** The code I'm using is here: https://flask-restx.readthedocs.io/en/latest/example.html Copied and pasted it. The only change I made was to run the debug server on 8080. ### **Repro Steps** (if applicable) 1. python3 example.py 2. Ran `curl http://localhost:8080/todos/`. That printed out the ones already created 2. Ran `curl http://localhost:8080/todos/ -d "data=Remember the milk" -X POST` ### **Expected Behavior** I expected a Todo to be created ### **Actual Behavior** ``` 127.0.0.1 - - [04/Nov/2020 14:22:29] "GET /todos/ HTTP/1.1" 200 - [2020-11-04 14:23:31,567] ERROR in app: Exception on /todos/ [POST] Traceback (most recent call last): File "/usr/lib/python3/dist-packages/flask/app.py", line 1949, in full_dispatch_request rv = self.dispatch_request() File "/usr/lib/python3/dist-packages/flask/app.py", line 1935, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/usr/local/lib/python3.8/dist-packages/flask_restx/api.py", line 375, in wrapper resp = resource(*args, **kwargs) File "/usr/lib/python3/dist-packages/flask/views.py", line 89, in view return self.dispatch_request(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/flask_restx/resource.py", line 44, in dispatch_request resp = meth(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/flask_restx/marshalling.py", line 248, in wrapper resp = f(*args, **kwargs) File "/home/jkugler/app/example.py", line 66, in post return DAO.create(api.payload), 201 File "/home/jkugler/app/example.py", line 32, in create todo['id'] = self.counter = self.counter + 1 TypeError: 'NoneType' object does not support item assignment ``` Debugging uncovered that the data passed to `def create(self, data)` was `None` which means api.payload here: ```python def post(self): '''Create a new task''' return DAO.create(api.payload), 201 ``` is `None`. I have no idea why. I've dumped requeset headers, as well as the contentx of the `wsgi.input` object, and everything is there. ### **Error Messages/Stack Trace** See above. ### **Environment** - Python version: Python 3.8.6 - Flask version: 1.1.1 - Flask-RESTX version: 0.2.0 - Other installed Flask extensions ``` Flask-BabelEx 0.9.3 Flask-Compress 1.4.0 Flask-Gravatar 0.4.2 Flask-Login 0.4.1 Flask-Mail 0.9.1 Flask-Migrate 2.5.2 Flask-Paranoid 0.2.0 Flask-Principal 0.4.0 Flask-Security 1.7.5 Flask-SQLAlchemy 2.1 Flask-WTF 0.14.2 ```
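For context, `api.payload` is populated from a JSON request body, while `curl -d "data=..."` sends form-encoded data with no JSON content type, which is consistent with the `None` seen above. A small sketch of exercising the example with an actual JSON body (using `requests` instead of curl; the `task` field name is the one used by the documented model):

```python
# Sketch: post a real JSON body (and Content-Type: application/json) so that
# api.payload is populated on the server side.
import requests

resp = requests.post(
    "http://localhost:8080/todos/",
    json={"task": "Remember the milk"},  # json= sets the JSON content type for us
    timeout=10,
)
print(resp.status_code, resp.json())
```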
open
2020-11-04T23:28:38Z
2021-03-03T19:38:30Z
https://github.com/python-restx/flask-restx/issues/248
[ "bug" ]
jkugler
3
kevlened/pytest-parallel
pytest
46
AttributeError: 'NoneType' object has no attribute '_registry' with pytest 5.3.0
Hi there, I run pytest with pytest-parallel (v 0.0.9) in a Gitlab CI-Pipeline. Without changing my code the pipeline jobs stopped working. The only thing that change was the pytest version from 5.2.4 to 5.3.0. Also the pipeline job doesn't seem to stop. It's stuck in the status running until I cancel it. The failed command is: ` - python setup.py install && pytest --workers 4` where `setup.py install` is our project specific setup script. The hopefully relevant output: ``` ============================= test session starts ============================== platform linux -- Python 3.7.5, pytest-5.3.0, py-1.8.0, pluggy-0.13.0 rootdir: /builds/fsd/odx-ninja plugins: parallel-0.0.9 collected 70 items pytest-parallel: 4 workers (processes), 1 test per worker (thread) ....Exception in thread Thread-1: Traceback (most recent call last): File "/usr/local/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/local/lib/python3.7/site-packages/pytest_parallel/__init__.py", line 81, in run run_test(self.session, item, None) File "/usr/local/lib/python3.7/site-packages/pytest_parallel/__init__.py", line 51, in run_test item.ihook.pytest_runtest_protocol(item=item, nextitem=nextitem) File "/usr/local/lib/python3.7/site-packages/pluggy/hooks.py", line 286, in __call__ return self._hookexec(self, self.get_hookimpls(), kwargs) File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 92, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 86, in <lambda> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False, File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 208, in _multicall return outcome.get_result() File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result raise ex[1].with_traceback(ex[2]) File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall res = hook_impl.function(*args) File "/usr/local/lib/python3.7/site-packages/_pytest/runner.py", line 82, in pytest_runtest_protocol item.ihook.pytest_runtest_logfinish(nodeid=item.nodeid, location=item.location) File "/usr/local/lib/python3.7/site-packages/pluggy/hooks.py", line 286, in __call__ return self._hookexec(self, self.get_hookimpls(), kwargs) File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 92, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 86, in <lambda> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False, File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 208, in _multicall return outcome.get_result() File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result raise ex[1].with_traceback(ex[2]) File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall res = hook_impl.function(*args) File "/usr/local/lib/python3.7/site-packages/_pytest/terminal.py", line 468, in pytest_runtest_logfinish main_color, _ = _get_main_color(self.stats) File "/usr/local/lib/python3.7/site-packages/_pytest/terminal.py", line 1102, in _get_main_color for found_type in stats: File "<string>", line 2, in __iter__ File "/usr/local/lib/python3.7/multiprocessing/managers.py", line 825, in _callmethod proxytype = self._manager._registry[token.typeid][-1] AttributeError: 'NoneType' object has no attribute '_registry' ``` When the `--workers 4` was removed the CI 
pipeline didn't fail. To avoid this we have now pinned the pytest version to 5.2.4. I also created an issue at pytest: https://github.com/pytest-dev/pytest/issues/6254 I hope I could help. Best regards
closed
2019-11-21T11:51:57Z
2019-11-25T09:11:22Z
https://github.com/kevlened/pytest-parallel/issues/46
[]
JuleBert
5
holoviz/panel
jupyter
7,153
FileDropper accepted_filetypes doesn't seem to properly validate
```python import panel as pn pn.extension() file_input = pn.widgets.FileDropper( multiple=True, accepted_filetypes=[".csv", ".parquet", ".parq", ".json", ".xlsx"], ) ``` <img width="710" alt="image" src="https://github.com/user-attachments/assets/98afbf7d-cb82-45e5-b8fe-aa1011682df5">
open
2024-08-15T22:51:09Z
2025-02-20T15:04:49Z
https://github.com/holoviz/panel/issues/7153
[]
ahuang11
5
vaexio/vaex
data-science
2,170
Example from the tutorial throws an error when trying to run df.viz.heatmap
**Description** I tried to play with the code found on tutorial but `df.viz.heatmap(df.x, df.y, what=np.log(vaex.stat.count()+1), selection=[None, True], limits='99.7%')` yields an error. I downloaded .ipynb from [here](https://vaex.io/docs/tutorial.html) ```python df.select(df.x > 0) @vaex.jupyter.interactive_selection(df) def plot(*args, **kwargs): print("Mean x for the selection is:", df.mean(df.x, selection=True)) df.viz.heatmap(df.x, df.y, what=np.log(vaex.stat.count()+1), selection=[None, True], limits='99.7%') plt.show() ``` **Software information** - Vaex version (`import vaex; vaex.__version__)`: ```python Python 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import vaex; vaex.__version__ {'vaex': '4.11.1', 'vaex-core': '4.11.1', 'vaex-viz': '0.5.2', 'vaex-hdf5': '0.12.3', 'vaex-server': '0.8.1', 'vaex-astro': '0.9.1', 'vaex-jupyter': '0.8.0', 'vaex-ml': '0.18.0'} ``` - Vaex was installed via: pip / conda-forge / from source pip - OS: Docker container running on Windows Host machine. The container is `rust:latest` `cat /etc/os-release` yields ``` PRETTY_NAME="Debian GNU/Linux 11 (bullseye)" NAME="Debian GNU/Linux" VERSION_ID="11" VERSION="11 (bullseye)" VERSION_CODENAME=bullseye ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" ``` **Additional information** ``` ```python Mean x for the selection is: 5.138019792304514 --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) File /usr/local/lib/python3.9/dist-packages/vaex/jupyter/utils.py:254, in interactive_selection.<locals>.wrapped.<locals>._selection_changed(df, selection_name) 252 with output: 253 clear_output(wait=True) --> 254 f_interact(df, selection_name) Input In [190], in plot(*args, **kwargs) 2 @vaex.jupyter.interactive_selection(df) 3 def plot(*args, **kwargs): 4 print("Mean x for the selection is:", df.mean(df.x, selection=True)) ----> 5 df.viz.heatmap(df.x, df.y, what=np.log(vaex.stat.count()+1), selection=[None, True], limits='99.7%') 6 plt.show() File /usr/local/lib/python3.9/dist-packages/vaex/viz/mpl.py:32, in viz_method.<locals>.wrapper(self, *args, **kwargs) 30 @functools.wraps(f) 31 def wrapper(self, *args, **kwargs): ---> 32 return f(self.df, *args, **kwargs) File /usr/local/lib/python3.9/dist-packages/vaex/viz/mpl.py:565, in heatmap(self, x, y, z, what, vwhat, reduce, f, normalize, normalize_axis, vmin, vmax, shape, vshape, limits, grid, colormap, figsize, xlabel, ylabel, aspect, tight_layout, interpolation, show, colorbar, colorbar_label, selection, selection_labels, title, background_color, pre_blend, background_alpha, visual, smooth_pre, smooth_post, wrap, wrap_columns, return_extra, hardcopy) 563 for i, (binby, limits) in enumerate(zip(x, xlimits)): 564 for j, what in enumerate(whats): --> 565 grid = grid_of_grids[i][j].get() 566 total_grid[i, j, :, :] = grid[:, None, ...] 
567 labels["what"] = what_labels File /usr/local/lib/python3.9/dist-packages/aplus/__init__.py:170, in Promise.get(self, timeout) 168 return self._value 169 else: --> 170 raise self._reason File /usr/local/lib/python3.9/dist-packages/vaex/promise.py:121, in Promise.then.<locals>.callAndReject(r) 119 try: 120 if aplus._isFunction(failure): --> 121 ret.fulfill(failure(r)) 122 else: 123 ret.reject(r) File /usr/local/lib/python3.9/dist-packages/vaex/delayed.py:38, in _log_error.<locals>._wrapped(exc) 35 print(f"*** DEBUG: Error from {name}", exc) 36 # import vaex 37 # vaex.utils.print_stack_trace() ---> 38 raise exc File /usr/local/lib/python3.9/dist-packages/vaex/promise.py:121, in Promise.then.<locals>.callAndReject(r) 119 try: 120 if aplus._isFunction(failure): --> 121 ret.fulfill(failure(r)) 122 else: 123 ret.reject(r) File /usr/local/lib/python3.9/dist-packages/vaex/delayed.py:38, in _log_error.<locals>._wrapped(exc) 35 print(f"*** DEBUG: Error from {name}", exc) 36 # import vaex 37 # vaex.utils.print_stack_trace() ---> 38 raise exc [... skipping similar frames: Promise.then.<locals>.callAndReject at line 121 (4 times), _log_error.<locals>._wrapped at line 38 (3 times)] File /usr/local/lib/python3.9/dist-packages/vaex/delayed.py:38, in _log_error.<locals>._wrapped(exc) 35 print(f"*** DEBUG: Error from {name}", exc) 36 # import vaex 37 # vaex.utils.print_stack_trace() ---> 38 raise exc File /usr/local/lib/python3.9/dist-packages/vaex/promise.py:121, in Promise.then.<locals>.callAndReject(r) 119 try: 120 if aplus._isFunction(failure): --> 121 ret.fulfill(failure(r)) 122 else: 123 ret.reject(r) File /usr/local/lib/python3.9/dist-packages/vaex/progress.py:91, in ProgressTree.exit_on.<locals>.error(arg) 89 def error(arg): 90 self.exit() ---> 91 raise arg File /usr/local/lib/python3.9/dist-packages/vaex/promise.py:121, in Promise.then.<locals>.callAndReject(r) 119 try: 120 if aplus._isFunction(failure): --> 121 ret.fulfill(failure(r)) 122 else: 123 ret.reject(r) File /usr/local/lib/python3.9/dist-packages/vaex/delayed.py:38, in _log_error.<locals>._wrapped(exc) 35 print(f"*** DEBUG: Error from {name}", exc) 36 # import vaex 37 # vaex.utils.print_stack_trace() ---> 38 raise exc File /usr/local/lib/python3.9/dist-packages/vaex/promise.py:121, in Promise.then.<locals>.callAndReject(r) 119 try: 120 if aplus._isFunction(failure): --> 121 ret.fulfill(failure(r)) 122 else: 123 ret.reject(r) File /usr/local/lib/python3.9/dist-packages/vaex/delayed.py:38, in _log_error.<locals>._wrapped(exc) 35 print(f"*** DEBUG: Error from {name}", exc) 36 # import vaex 37 # vaex.utils.print_stack_trace() ---> 38 raise exc [... 
skipping similar frames: _log_error.<locals>._wrapped at line 38 (1 times), Promise.then.<locals>.callAndReject at line 121 (1 times)] File /usr/local/lib/python3.9/dist-packages/vaex/promise.py:121, in Promise.then.<locals>.callAndReject(r) 119 try: 120 if aplus._isFunction(failure): --> 121 ret.fulfill(failure(r)) 122 else: 123 ret.reject(r) File /usr/local/lib/python3.9/dist-packages/vaex/delayed.py:38, in _log_error.<locals>._wrapped(exc) 35 print(f"*** DEBUG: Error from {name}", exc) 36 # import vaex 37 # vaex.utils.print_stack_trace() ---> 38 raise exc File /usr/local/lib/python3.9/dist-packages/vaex/execution.py:564, in ExecutorLocal.process_tasks(self, thread_index, i1, i2, chunks, run, df, tasks) 562 try: 563 if task.see_all: --> 564 task_part.process(thread_index, i1, i2, filter_mask, selections, blocks) 565 else: 566 task_part.process(thread_index, i1, i2, filter_mask, selections, blocks) File /usr/local/lib/python3.9/dist-packages/vaex/cpu.py:763, in TaskPartAggregation.process(self, thread_index, i1, i2, filter_mask, selection_masks, blocks) 761 # we only have 1 data mask, since it's locally combined 762 if selection_mask is not None: --> 763 agg.set_data_mask(thread_index, selection_mask) 764 references.extend([selection_mask]) 765 else: RuntimeError: Expected a 1d array ``` ```
open
2022-08-14T14:12:34Z
2022-11-02T08:06:52Z
https://github.com/vaexio/vaex/issues/2170
[]
thomas-k-cameron
1
2noise/ChatTTS
python
711
The discussion group is full
As the title says, discussion group 4 is full. Will a new group chat be created?
closed
2024-08-22T11:10:12Z
2024-08-22T13:43:43Z
https://github.com/2noise/ChatTTS/issues/711
[ "documentation" ]
zzhdbw
1
serengil/deepface
deep-learning
464
Retina Face ignores "DEEPFACE_HOME" when downloading.
The issue is clear. RetinaFace downloads the model to a path determined by its own code [here](https://github.com/serengil/retinaface/blob/34b1ec11a4a0beee2ebd2c095742b3d070e23fb5/retinaface/model/retinaface_model.py#L18-L22), while all other wrappers rely on the output of [get_deepface_home()](https://github.com/serengil/deepface/blob/e4c3883f5902a4619725f4c2df4646d107f1e25a/deepface/commons/functions.py#L59-L60), e.g. the [dlib wrapper](https://github.com/serengil/deepface/blob/e4c3883f5902a4619725f4c2df4646d107f1e25a/deepface/detectors/DlibWrapper.py#L10-L15). There are a few ways to solve this, but since the change would require changes to retinaface, guidance from the author would be helpful. The simplest solution IMO is to allow passing the home directory in to [retinaface.build_model()](https://github.com/serengil/retinaface/blob/34b1ec11a4a0beee2ebd2c095742b3d070e23fb5/retinaface/RetinaFace.py#L34). Then the change in deepface would just be to pass in the correct home dir in the [retinaface call](https://github.com/serengil/deepface/blob/e4c3883f5902a4619725f4c2df4646d107f1e25a/deepface/detectors/RetinaFaceWrapper.py#L6).
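A rough sketch of the proposed change (the `home_dir` parameter name and the weights path below are assumptions, mirroring what the other wrappers already do with `get_deepface_home()`):

```python
# Hypothetical signature for retinaface.build_model: an optional home_dir that
# falls back to the user's home directory, so deepface can pass its own home in.
import os

def build_model(home_dir=None):
    home = home_dir if home_dir is not None else os.path.expanduser("~")
    weights_path = os.path.join(home, ".deepface", "weights", "retinaface.h5")
    # ... download the weights to weights_path if missing, then build the model
```

On the deepface side the wrapper could then call something like `build_model(home_dir=functions.get_deepface_home())` (again, an illustration of the proposal rather than existing API).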
closed
2022-04-26T15:45:11Z
2022-05-22T20:14:01Z
https://github.com/serengil/deepface/issues/464
[ "enhancement" ]
daviesthomas
3
ultralytics/ultralytics
pytorch
18,693
TFLite python example not predicting any bounding boxes
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report. ### Ultralytics YOLO Component _No response_ ### Bug ultralytics\examples\YOLOv8-TFLite-Python\main.py is throwing AttributeError: 'tuple' object has no attribute 'flatten' ### Environment OS Windows-10-10.0.22631-SP0 Environment Windows Python 3.8.10 Install pip RAM 63.69 GB Disk 342.0/3814.7 GB CPU 13th Gen Intel Core(TM) i9-13950HX CPU count 32 GPU None GPU count None CUDA None numpy ✅ 1.24.3>=1.23.0 numpy ✅ 1.24.3<2.0.0; sys_platform == "darwin" matplotlib ✅ 3.7.5>=3.3.0 opencv-python ✅ 4.10.0.84>=4.6.0 pillow ✅ 10.4.0>=7.1.2 pyyaml ✅ 6.0.2>=5.3.1 requests ✅ 2.32.3>=2.23.0 scipy ✅ 1.10.1>=1.4.1 torch ✅ 2.4.1>=1.8.0 torch ✅ 2.4.1!=2.4.0,>=1.8.0; sys_platform == "win32" torchvision ✅ 0.19.1>=0.9.0 tqdm ✅ 4.67.1>=4.64.0 psutil ✅ 6.1.1 py-cpuinfo ✅ 9.0.0 pandas ✅ 2.0.3>=1.1.4 seaborn ✅ 0.13.2>=0.11.0 ultralytics-thop ✅ 2.0.13>=2.0.0 ### Minimal Reproducible Example python main.py --model yolov8n_full_integer_quant.tflite --img image.jpg --conf 0.25 --iou 0.45 --metadata "metadata.yaml" ### Additional _No response_ ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
open
2025-01-15T12:06:43Z
2025-01-22T09:38:33Z
https://github.com/ultralytics/ultralytics/issues/18693
[ "bug", "non-reproducible", "exports" ]
vjayd
4
gradio-app/gradio
data-science
9,991
Using `gr.render()` leads to RuntimeError: dictionary changed size during iteration
### Describe the bug Hello. I often get this error stack when I use gradio to draw a complex page in one page. I think it is same issue #9410. I tried to find, why this issue is occurred. I created a reproduction code by reading the relevant code while looking at the error stack. I think the reason is `render` event handler changes the dictionary(`state.blocks_config.blocks`) while processing other `render` event handler. (The `state` in here is argument of `process_api` function, which is `SessionState` type) I don't know exactly, but when I printed the length of the dictionary(`state.blocks_config.blocks`) before and after the `process_api` function, the length of the dictionary that was output at the end of the process_api function changed. ``` Start of process_api : 3 End of process_api : 104 ~ 205 ``` It seems that two `render` functions share one `session state` and refer to it at the same time. I set the `render` function with `concurrency_id="render"` and `concurrency_limit=1` so that the `render` function does not occur at the same time, so I could solve the problem temporarily. This error is occurred in both version `4.44.1` and `5.6.0`. ### Have you searched existing issues? 🔎 - [X] I have searched and found no existing issues ### Reproduction ```python import gradio as gr with gr.Blocks() as demo: with gr.Row(): @gr.render() def render(): for i in range(100): gr.Textbox(i) @gr.render() def render(): for i in range(100): gr.Textbox(i) if __name__ == "__main__": demo.queue(default_concurrency_limit=64).launch() ``` ### Screenshot ![image](https://github.com/user-attachments/assets/be8d70dd-5add-48ef-a879-c669aa62689b) One of left or right is not loaded because of RuntimeError. Sometimes both are loaded. ### Logs ```shell ================ Gradio-5.6.0 =============================== Traceback (most recent call last): File "/home/-/anaconda3/envs/gradio-test/lib/python3.12/site-packages/gradio/queueing.py", line 624, in process_events response = await route_utils.call_process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/-/anaconda3/envs/gradio-test/lib/python3.12/site-packages/gradio/route_utils.py", line 323, in call_process_api output = await app.get_blocks().process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/-/anaconda3/envs/gradio-test/lib/python3.12/site-packages/gradio/blocks.py", line 2071, in process_api output["render_config"] = state.blocks_config.get_config( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/-/anaconda3/envs/gradio-test/lib/python3.12/site-packages/gradio/blocks.py", line 900, in get_config for _id, block in self.blocks.items(): RuntimeError: dictionary changed size during iteration ================ Gradio-4.44.1============================ Traceback (most recent call last): File "/home/-/anaconda3/envs/gradio-test/lib/python3.12/site-packages/gradio/queueing.py", line 536, in process_events response = await route_utils.call_process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/-/anaconda3/envs/gradio-test/lib/python3.12/site-packages/gradio/route_utils.py", line 322, in call_process_api output = await app.get_blocks().process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/-/anaconda3/envs/gradio-test/lib/python3.12/site-packages/gradio/blocks.py", line 1986, in process_api output["render_config"] = state.blocks_config.get_config( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/-/anaconda3/envs/gradio-test/lib/python3.12/site-packages/gradio/blocks.py", line 869, in get_config for _id, block in self.blocks.items(): RuntimeError: 
dictionary changed size during iteration ``` ### System Info ```shell ================ Gradio-5.6.0 =============================== Gradio Environment Information: ------------------------------ Operating System: Linux gradio version: 5.6.0 gradio_client version: 1.4.3 ------------------------------------------------ gradio dependencies in your environment: aiofiles: 23.2.1 anyio: 4.6.2.post1 audioop-lts is not installed. fastapi: 0.115.5 ffmpy: 0.4.0 gradio-client==1.4.3 is not installed. httpx: 0.27.2 huggingface-hub: 0.26.2 jinja2: 3.1.4 markupsafe: 2.1.5 numpy: 2.1.3 orjson: 3.10.11 packaging: 24.2 pandas: 2.2.3 pillow: 11.0.0 pydantic: 2.9.2 pydub: 0.25.1 python-multipart==0.0.12 is not installed. pyyaml: 6.0.2 ruff: 0.7.4 safehttpx: 0.1.1 semantic-version: 2.10.0 starlette: 0.41.3 tomlkit==0.12.0 is not installed. typer: 0.13.1 typing-extensions: 4.12.2 urllib3: 2.2.3 uvicorn: 0.32.0 authlib; extra == 'oauth' is not installed. itsdangerous; extra == 'oauth' is not installed. gradio_client dependencies in your environment: fsspec: 2024.10.0 httpx: 0.27.2 huggingface-hub: 0.26.2 packaging: 24.2 typing-extensions: 4.12.2 websockets: 12.0 ================ Gradio-4.44.1 =============================== Gradio Environment Information: ------------------------------ Operating System: Linux gradio version: 4.44.1 gradio_client version: 1.3.0 ------------------------------------------------ gradio dependencies in your environment: aiofiles: 23.2.1 anyio: 4.6.2.post1 fastapi: 0.115.5 ffmpy: 0.4.0 gradio-client==1.3.0 is not installed. httpx: 0.27.2 huggingface-hub: 0.26.2 importlib-resources: 6.4.5 jinja2: 3.1.4 markupsafe: 2.1.5 matplotlib: 3.9.2 numpy: 2.1.3 orjson: 3.10.11 packaging: 24.2 pandas: 2.2.3 pillow: 10.4.0 pydantic: 2.9.2 pydub: 0.25.1 python-multipart: 0.0.17 pyyaml: 6.0.2 ruff: 0.7.4 semantic-version: 2.10.0 tomlkit==0.12.0 is not installed. typer: 0.13.1 typing-extensions: 4.12.2 urllib3: 2.2.3 uvicorn: 0.32.0 authlib; extra == 'oauth' is not installed. itsdangerous; extra == 'oauth' is not installed. gradio_client dependencies in your environment: fsspec: 2024.10.0 httpx: 0.27.2 huggingface-hub: 0.26.2 packaging: 24.2 typing-extensions: 4.12.2 websockets: 12.0 ``` ### Severity Blocking usage of gradio
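For reference, a sketch of the temporary workaround described above (it assumes `gr.render` accepts the usual event keyword arguments `concurrency_id` and `concurrency_limit`, as the description implies), which serialises the two render events so they never mutate the shared session state at the same time:

```python
# Workaround sketch based on the description above: both render handlers share
# one concurrency group with a limit of 1, so they run one after the other.
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        @gr.render(concurrency_id="render", concurrency_limit=1)
        def render_left():
            for i in range(100):
                gr.Textbox(i)

        @gr.render(concurrency_id="render", concurrency_limit=1)
        def render_right():
            for i in range(100):
                gr.Textbox(i)

demo.queue(default_concurrency_limit=64).launch()
```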
closed
2024-11-19T03:01:52Z
2024-11-28T17:42:16Z
https://github.com/gradio-app/gradio/issues/9991
[ "bug", "pending clarification" ]
BLESS11186
6
WZMIAOMIAO/deep-learning-for-image-processing
deep-learning
580
Dataset storage location
Hello! This is my first time watching your videos. I am using the PyTorch version of Mask R-CNN and have downloaded and extracted the VOC2012 dataset. Do I just need to create a new folder named data, put the extracted dataset directly into the data folder, and then change the setting to default='/data/VOCdevkit'? When I do that I get an error: file '/data/VOCdevkit' not found. The question is a bit silly... but I would still appreciate it if you could take the time to answer. Thank you!
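A hedged guess at the cause (not confirmed in the thread, and the argument name below is illustrative rather than taken from the repository): '/data/VOCdevkit' is an absolute path starting at the filesystem root, while a data folder created next to the training scripts is normally referenced with a relative path:

```python
# Illustrative only -- the real argument name in the repository may differ.
# A leading "/" makes the path absolute; a folder beside the scripts needs a
# relative path such as "./data/VOCdevkit".
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--data-path', default='./data/VOCdevkit',
                    help='dataset root (the extracted VOCdevkit folder)')
```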
closed
2022-06-22T14:44:27Z
2022-06-25T11:30:21Z
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/580
[]
20000131
1
marcomusy/vedo
numpy
311
How to calculate the intersection point between line and mask
If there is a line and a mask, how can I obtain the intersection point? For example: mask = np.zeros([300, 300, 50]) mask[:, :, 10:30] = 1 Then, the line is: p1 = [100, 100, -1] p2 = [100, 100, 1] The intersection point should be: [100, 100, 10] and [100, 100, 29].
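One way to get these points (a numpy-only sketch, independent of vedo; note that the segment is extended here so it actually reaches the masked slab, since z only runs from -1 to 1 in the example above): sample points densely along the line, look the samples up in the mask, and report where the inside/outside state flips.

```python
# Sketch: sample the segment, index the mask at each sample, and report the
# samples where the mask value changes (the entry/exit points of the line).
import numpy as np

mask = np.zeros([300, 300, 50])
mask[:, :, 10:30] = 1

p1 = np.array([100, 100, 0])
p2 = np.array([100, 100, 49])   # extend the line through the whole volume

n_samples = 1000
ts = np.linspace(0.0, 1.0, n_samples)
points = p1 + ts[:, None] * (p2 - p1)                      # shape (n_samples, 3)
idx = np.clip(np.round(points).astype(int), 0, np.array(mask.shape) - 1)
inside = mask[idx[:, 0], idx[:, 1], idx[:, 2]] > 0

# transitions between outside and inside mark the intersections
flips = np.flatnonzero(np.diff(inside.astype(int)) != 0)
for f in flips:
    print("intersection near", points[f + 1])
```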
open
2021-02-10T07:57:25Z
2021-02-16T01:35:54Z
https://github.com/marcomusy/vedo/issues/311
[]
zhang-qiang-github
3
Esri/arcgis-python-api
jupyter
1,380
Add hard date parameters to item usage method
**Is your feature request related to a problem? Please describe.** I'm frustrated when trying to use the item.usage method to obtain AGO item usage history data, because it only allows me to enter relative dates as parameters to filter dates. Example '7D' for past 7 days, or '30D' for past 30 days. This makes the results different depending on when you run the query. It's also frustrating because I cannot access the full 2-year usage history of my items, because the maximum filter I can enter for the date_range parameter is 1Y for 1 year. It would be easier for me if I could just enter 2 dates for the start and end of my filter as parameters. ```python # This is an example from an Esri sample of how I'd use it today with just a relative date filter gis = GIS("https://arcgis.com", "user_name") content = gis.content.search(query="", sort_field="title", sort_order="asc", max_items=2) for item in content: var_usage = item.usage('1Y') print(var_usage) # at this point I'd actually be taking the results from items and writing them to a standalone table in a geodatabase or a feature service along with usage data from the other items in this loop ``` **Describe the solution you'd like** I'd like two parameters added for me to enter start and end dates for the date range filter. So for example, if I want to grab all usage for the 3rd quarter of 2022 (July, August, September), I'd enter a start date of 07-01-2022 and an end date of 09-30-2022. The results would contain one row/element for each day in that range, including the start and end dates, similar to how the results are returned now. It would also be helpful if the end date could be optional, so if I want to grab usage from the beginning of the year until now, I just enter a parameter of start_date='2022-01-01' and then leave the end_date parameter off. The results would then contain one row for each day from the start date until the time I'm running my script. **Describe alternatives you've considered** Manually hitting the undocumented item usage REST API endpoint which I see used when the chart on the Item usage webpage is populated, and also used on the backend of the existing Python Item.Usage method. I see the rest endpoint does provide parameters for inputting start/end dates in UNIX time, although I see it limits results to 60 days so you have to iterate to get all the history: https://myorgname.arcgis.com/sharing/rest/portals/<orgId>/usage?f=json&startTime=1609459200000&endTime=1614470400000&period=1d&vars=num&groupby=name&etype=svcusg&name=<myItemId> **Additional context** Add any other context or screenshots about the feature request here.
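While waiting for first-class start/end parameters, a sketch of the 60-day windowing arithmetic described above (the actual request is left as a commented placeholder because the endpoint is undocumented):

```python
# Sketch of iterating the undocumented usage endpoint in 60-day windows.
# fetch_usage(start_ms, end_ms) is a placeholder for whatever call is used to
# hit .../portals/<orgId>/usage with startTime/endTime in Unix milliseconds.
from datetime import datetime, timedelta

def to_unix_ms(dt):
    return int(dt.timestamp() * 1000)

start = datetime(2022, 7, 1)
end = datetime(2022, 9, 30)

window = timedelta(days=60)
cursor = start
while cursor < end:
    window_end = min(cursor + window, end)
    start_ms, end_ms = to_unix_ms(cursor), to_unix_ms(window_end)
    # results = fetch_usage(start_ms, end_ms)   # placeholder call
    print("query window:", cursor.date(), "->", window_end.date(), start_ms, end_ms)
    cursor = window_end
```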
closed
2022-11-07T16:02:08Z
2022-11-07T16:15:17Z
https://github.com/Esri/arcgis-python-api/issues/1380
[ "enhancement" ]
andrewrudin-austin
1
stanfordnlp/stanza
nlp
546
Can I use nltk as tokenizer and stanza as dependency parser?
I'd like to try it out: can the dependency structure of a sentence improve the performance of word vectors? However, when I use NLTK as the tokenizer for my model, I found that the token indices are totally different from what Stanza gives, so is there any way to change this?
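One option worth trying (a sketch that relies on Stanza's `tokenize_pretokenized` switch): tokenize with NLTK first and hand the token lists to Stanza, so both tools share exactly the same indices.

```python
# Sketch: NLTK does the tokenization, Stanza parses the pre-tokenized sentences,
# so the word indices in the dependency parse line up with NLTK's tokens.
import nltk
import stanza

nltk.download('punkt', quiet=True)
# stanza.download('en')  # first run only

text = "I'd like to have a try. Can dependencies improve word vectors?"
sentences = [nltk.word_tokenize(s) for s in nltk.sent_tokenize(text)]

nlp = stanza.Pipeline('en', processors='tokenize,pos,lemma,depparse',
                      tokenize_pretokenized=True)
doc = nlp(sentences)

for sent in doc.sentences:
    for word in sent.words:
        print(word.id, word.text, word.head, word.deprel)
```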
closed
2020-12-04T07:52:41Z
2020-12-04T08:51:32Z
https://github.com/stanfordnlp/stanza/issues/546
[ "question" ]
Woodrow1026
2
slackapi/python-slack-sdk
asyncio
918
Enable having additional fields in Bot/Installation instances
The `Bot` and `Installation` classes should act as dict objects for the sake of more flexibility in apps. >At this point, `Installation` does not act as a dict value, it's not possible to have additional fields such as `installed_user_ids` in it. I think having custom fields in the class on a developer side is a common use case. I will create a task for it. With the change, you can go with overridden find_installation method. _Originally posted by @seratch in https://github.com/slackapi/bolt-python/issues/209#issuecomment-761259451_
closed
2021-01-15T23:53:16Z
2021-01-25T21:35:05Z
https://github.com/slackapi/python-slack-sdk/issues/918
[ "enhancement", "Version: 3x", "oauth" ]
seratch
1
tableau/server-client-python
rest-api
972
403132 error when publishing a twb file
I'm trying to publish a .twb workbook because I can change the database name variable inside the XML file, but it returns an error every time; I already tried skip_connection_check=True. 403132: Failed connection check One or more data sources used by the workbook could not be reached ``` with server.auth.sign_in(tableau_auth): # create a workbook item wb_item = TSC.WorkbookItem(name='Contracts', project_id='b42fe4fc-b954-453e-b5a9-2943be65a680') # call the publish method with the workbook item wb_item = server.workbooks.publish(wb_item, './Contracts/Contracts.twb', 'Overwrite', skip_connection_check=True) ```
closed
2022-01-06T19:29:02Z
2023-02-15T08:06:29Z
https://github.com/tableau/server-client-python/issues/972
[ "help wanted" ]
h4n3h
3
iterative/dvc
data-science
10,202
pull: --allow-missing not behaving as expected
# Bug Report ## pull: --allow-missing not behaving as expected ## Description `dvc pull --allow-missing <target>` gives error about missing stuff. `dvc pull -R <target>` doesn't do a full recursive search for targets. ### Reproduce 1. multiple dvc pipelines 2. I think I forgot to dvc push on another computer 3. on this computer `dvc pull --allow-missing <target>` output: ``` dvc pull --allow-missing -vv .\pipelines\ensemble\dvc.yaml 2023-12-26 18:13:48,389 DEBUG: v3.26.2 (pip), CPython 3.10.13 on Windows-10-10.0.19045-SP0 2023-12-26 18:13:48,390 DEBUG: command: C:\Users\starrgw1\Anaconda3\envs\almds\Scripts\dvc pull --allow-missing -vv .\pipelines\ensemble\dvc.yaml 2023-12-26 18:13:48,390 TRACE: Namespace(quiet=0, verbose=2, cprofile=False, cprofile_dump=None, yappi=False, yappi_separate_threads=False, viztracer=False, viztracer_depth=None, viz tracer_async=False, pdb=False, instrument=False, instrument_open=False, show_stack=False, cd='.', cmd='pull', jobs=None, targets=['.\\pipelines\\ensemble\\dvc.yaml'], remote=None, al l_branches=False, all_tags=False, all_commits=False, force=False, with_deps=False, recursive=False, run_cache=False, glob=False, allow_missing=True, func=<class 'dvc.commands.data_sy nc.CmdDataPull'>, parser=DvcParser(prog='dvc', usage=None, description='Data Version Control', formatter_class=<class 'argparse.RawTextHelpFormatter'>, conflict_handler='error', add_ help=False)) 2023-12-26 18:13:49,148 TRACE: 15.99 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype 2023-12-26 18:13:49,153 TRACE: 3.63 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl 2023-12-26 18:13:49,154 TRACE: 676.60 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\almds_dl 2023-12-26 18:13:49,157 TRACE: 1.34 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis 2023-12-26 18:13:49,165 TRACE: 6.35 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis\experiments 2023-12-26 18:13:49,166 TRACE: 26.30 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis\experiments\2dcnn 2023-12-26 18:13:49,167 TRACE: 786.90 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis\experiments\brute_force 2023-12-26 18:13:49,169 TRACE: 932.80 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis\experiments\brute_force_volume 2023-12-26 18:13:49,173 TRACE: 2.43 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis\experiments\distill_shrink_cnn 2023-12-26 18:13:49,173 TRACE: 19.80 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis\experiments\distill_shrink_cnn\exp2_random 2023-12-26 18:13:49,174 TRACE: 15.40 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis\experiments\distill_shrink_cnn\exp3_grid 2023-12-26 18:13:49,176 TRACE: 15.40 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis\experiments\distill_shrink_cnn\exp4_alphtemp 2023-12-26 18:13:49,176 TRACE: 15.70 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis\experiments\dwsc_training 2023-12-26 18:13:49,177 TRACE: 8.60 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis\experiments\efficient_net 2023-12-26 18:13:49,178 TRACE: 7.20 mks in collecting stages from 
C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis\experiments\shrink 2023-12-26 18:13:49,179 TRACE: 17.70 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\analysis\experiments\training 2023-12-26 18:13:49,180 TRACE: 770.70 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\data 2023-12-26 18:13:49,181 TRACE: 2.20 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\evaluation_results_full 2023-12-26 18:13:49,185 TRACE: 3.60 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\models 2023-12-26 18:13:49,186 TRACE: 2.50 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\models\big 2023-12-26 18:13:49,187 TRACE: 2.10 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\models\ensemble 2023-12-26 18:13:49,187 TRACE: 2.10 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\models\little 2023-12-26 18:13:49,188 TRACE: 2.20 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\models\student 2023-12-26 18:13:49,189 TRACE: 2.00 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl\models\twod 2023-12-26 18:13:49,191 TRACE: 1.98 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl_matlab 2023-12-26 18:13:49,192 TRACE: 37.70 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl_matlab\dataset creation 2023-12-26 18:13:49,193 TRACE: 75.60 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl_matlab\truthing 2023-12-26 18:13:49,194 TRACE: 2.40 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\almds_dl_matlab\umap 2023-12-26 18:13:49,201 TRACE: 6.65 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines 2023-12-26 18:13:49,269 TRACE: pipelines\big\params.yaml does not exist, it won't be used in parametrization 2023-12-26 18:13:49,313 TRACE: Context during resolution of stage train@surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'big', 'verbose': 2, 'data_parms': 'H', 'item': 'surface'} 2023-12-26 18:13:49,317 TRACE: Context during resolution of stage train@volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'big', 'verbose': 2, 'data_parms': 'H', 'item': 'volume'} 2023-12-26 18:13:49,322 TRACE: Context during resolution of stage postprocess@surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'big', 'verbose': 2, 'data_parms': 'H', 'item': 'surface'} 2023-12-26 18:13:49,325 TRACE: Context during resolution of stage postprocess@volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'big', 'verbose': 2, 'data_parms': 'H', 'item': 'volume'} 2023-12-26 18:13:49,330 TRACE: Context during resolution of stage evaluate: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'big', 'verbose': 2, 'data_parms': 'H'} 2023-12-26 18:13:49,338 TRACE: 136.02 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\big 2023-12-26 18:13:49,339 TRACE: 14.90 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\big\models 2023-12-26 18:13:49,361 TRACE: pipelines\data\params.yaml does not exist, it won't be used in parametrization 2023-12-26 18:13:49,362 TRACE: Context during resolution of stage prepare_dataset@H: {'matlab_cmd': 'python submit_job.py matlab', 'python_cmd': 'python submit_job.py -m', 'item': 'H'} 2023-12-26 18:13:49,366 TRACE: Context during resolution 
of stage prepare_dataset@M: {'matlab_cmd': 'python submit_job.py matlab', 'python_cmd': 'python submit_job.py -m', 'item': 'M'} 2023-12-26 18:13:49,371 TRACE: Context during resolution of stage make_folds@H: {'matlab_cmd': 'python submit_job.py matlab', 'python_cmd': 'python submit_job.py -m', 'item': 'H'} 2023-12-26 18:13:49,373 TRACE: Context during resolution of stage make_folds@M: {'matlab_cmd': 'python submit_job.py matlab', 'python_cmd': 'python submit_job.py -m', 'item': 'M'} 2023-12-26 18:13:49,376 TRACE: Context during resolution of stage make_tfrecord@H-surface: {'matlab_cmd': 'python submit_job.py matlab', 'python_cmd': 'python submit_job.py -m', 'item': {'data_parms': 'H', 'dettype': 'surface'}} 2023-12-26 18:13:49,378 TRACE: Context during resolution of stage make_tfrecord@H-volume: {'matlab_cmd': 'python submit_job.py matlab', 'python_cmd': 'python submit_job.py -m', 'item': {'data_parms': 'H', 'dettype': 'volume'}} 2023-12-26 18:13:49,380 TRACE: Context during resolution of stage make_tfrecord@M-surface: {'matlab_cmd': 'python submit_job.py matlab', 'python_cmd': 'python submit_job.py -m', 'item': {'data_parms': 'M', 'dettype': 'surface'}} 2023-12-26 18:13:49,383 TRACE: Context during resolution of stage make_tfrecord@M-volume: {'matlab_cmd': 'python submit_job.py matlab', 'python_cmd': 'python submit_job.py -m', 'item': {'data_parms': 'M', 'dettype': 'volume'}} 2023-12-26 18:13:49,385 TRACE: 44.56 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\data 2023-12-26 18:13:49,506 TRACE: pipelines\ensemble\params.yaml does not exist, it won't be used in parametrization 2023-12-26 18:13:49,513 TRACE: Context during resolution of stage train@0-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 0, 'dettype': 'surface'}} 2023-12-26 18:13:49,517 TRACE: Context during resolution of stage train@0-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 0, 'dettype': 'volume'}} 2023-12-26 18:13:49,520 TRACE: Context during resolution of stage train@1-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 1, 'dettype': 'surface'}} 2023-12-26 18:13:49,524 TRACE: Context during resolution of stage train@1-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 1, 'dettype': 'volume'}} 2023-12-26 18:13:49,542 TRACE: Context during resolution of stage train@2-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 2, 'dettype': 'surface'}} 2023-12-26 18:13:49,546 TRACE: Context during resolution of stage train@2-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 2, 'dettype': 'volume'}} 2023-12-26 18:13:49,549 TRACE: Context during resolution of stage train@3-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 3, 'dettype': 'surface'}} 2023-12-26 18:13:49,554 TRACE: Context during resolution of stage train@3-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 3, 'dettype': 'volume'}} 2023-12-26 18:13:49,558 TRACE: Context during resolution of stage train@4-surface: 
{'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 4, 'dettype': 'surface'}} 2023-12-26 18:13:49,562 TRACE: Context during resolution of stage train@4-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 4, 'dettype': 'volume'}} 2023-12-26 18:13:49,565 TRACE: Context during resolution of stage train@5-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 5, 'dettype': 'surface'}} 2023-12-26 18:13:49,569 TRACE: Context during resolution of stage train@5-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 5, 'dettype': 'volume'}} 2023-12-26 18:13:49,571 TRACE: Context during resolution of stage train@6-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 6, 'dettype': 'surface'}} 2023-12-26 18:13:49,575 TRACE: Context during resolution of stage train@6-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 6, 'dettype': 'volume'}} 2023-12-26 18:13:49,579 TRACE: Context during resolution of stage train@7-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 7, 'dettype': 'surface'}} 2023-12-26 18:13:49,582 TRACE: Context during resolution of stage train@7-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 7, 'dettype': 'volume'}} 2023-12-26 18:13:49,586 TRACE: Context during resolution of stage train@8-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 8, 'dettype': 'surface'}} 2023-12-26 18:13:49,590 TRACE: Context during resolution of stage train@8-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 8, 'dettype': 'volume'}} 2023-12-26 18:13:49,595 TRACE: Context during resolution of stage train@9-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 9, 'dettype': 'surface'}} 2023-12-26 18:13:49,600 TRACE: Context during resolution of stage train@9-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 9, 'dettype': 'volume'}} 2023-12-26 18:13:49,605 TRACE: Context during resolution of stage evaluate_individual@0: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 0} 2023-12-26 18:13:49,609 TRACE: Context during resolution of stage evaluate_individual@1: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 1} 2023-12-26 18:13:49,612 TRACE: Context during resolution of stage evaluate_individual@2: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 2} 2023-12-26 18:13:49,616 TRACE: Context during resolution of stage evaluate_individual@3: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 3} 2023-12-26 18:13:49,619 TRACE: Context during resolution of stage 
evaluate_individual@4: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 4} 2023-12-26 18:13:49,622 TRACE: Context during resolution of stage evaluate_individual@5: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 5} 2023-12-26 18:13:49,625 TRACE: Context during resolution of stage evaluate_individual@6: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 6} 2023-12-26 18:13:49,628 TRACE: Context during resolution of stage evaluate_individual@7: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 7} 2023-12-26 18:13:49,631 TRACE: Context during resolution of stage evaluate_individual@8: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 8} 2023-12-26 18:13:49,633 TRACE: Context during resolution of stage evaluate_individual@9: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 9} 2023-12-26 18:13:49,639 TRACE: Context during resolution of stage postprocess@surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 'surface'} 2023-12-26 18:13:49,644 TRACE: Context during resolution of stage postprocess@volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 'volume'} 2023-12-26 18:13:49,650 TRACE: Context during resolution of stage evaluate: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H'} 2023-12-26 18:13:49,658 TRACE: 272.35 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble 2023-12-26 18:13:49,669 TRACE: 9.54 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components 2023-12-26 18:13:49,671 TRACE: 1.28 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c0 2023-12-26 18:13:49,672 TRACE: 9.40 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c0\models 2023-12-26 18:13:49,674 TRACE: 1.04 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c1 2023-12-26 18:13:49,674 TRACE: 8.30 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c1\models 2023-12-26 18:13:49,676 TRACE: 992.70 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c2 2023-12-26 18:13:49,678 TRACE: 27.50 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c2\models 2023-12-26 18:13:49,680 TRACE: 1.21 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c3 2023-12-26 18:13:49,681 TRACE: 10.80 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c3\models 2023-12-26 18:13:49,682 TRACE: 1.08 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c4 2023-12-26 18:13:49,683 TRACE: 7.50 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c4\models 2023-12-26 18:13:49,685 TRACE: 972.60 mks in collecting stages from 
C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c5 2023-12-26 18:13:49,686 TRACE: 6.80 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c5\models 2023-12-26 18:13:49,687 TRACE: 943.00 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c6 2023-12-26 18:13:49,688 TRACE: 6.80 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c6\models 2023-12-26 18:13:49,690 TRACE: 942.30 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c7 2023-12-26 18:13:49,690 TRACE: 6.70 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c7\models 2023-12-26 18:13:49,692 TRACE: 1.11 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c8 2023-12-26 18:13:49,693 TRACE: 8.20 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c8\models 2023-12-26 18:13:49,695 TRACE: 1.05 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c9 2023-12-26 18:13:49,696 TRACE: 11.80 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\components\c9\models 2023-12-26 18:13:49,696 TRACE: 7.00 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\ensemble\models 2023-12-26 18:13:49,715 TRACE: pipelines\little\params.yaml does not exist, it won't be used in parametrization 2023-12-26 18:13:49,719 TRACE: Context during resolution of stage train@surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'little', 'verbose': 2, 'data_parms': 'H', 'item': 'surface'} 2023-12-26 18:13:49,722 TRACE: Context during resolution of stage train@volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'little', 'verbose': 2, 'data_parms': 'H', 'item': 'volume'} 2023-12-26 18:13:49,726 TRACE: Context during resolution of stage postprocess@surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'little', 'verbose': 2, 'data_parms': 'H', 'item': 'surface'} 2023-12-26 18:13:49,729 TRACE: Context during resolution of stage postprocess@volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'little', 'verbose': 2, 'data_parms': 'H', 'item': 'volume'} 2023-12-26 18:13:49,733 TRACE: Context during resolution of stage evaluate: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'little', 'verbose': 2, 'data_parms': 'H'} 2023-12-26 18:13:49,740 TRACE: 43.03 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\little 2023-12-26 18:13:49,741 TRACE: 19.50 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\little\models 2023-12-26 18:13:49,756 TRACE: pipelines\shrink\params.yaml does not exist, it won't be used in parametrization 2023-12-26 18:13:49,762 TRACE: Context during resolution of stage train@surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'shrink', 'verbose': 2, 'data_parms': 'H', 'item': 'surface'} 2023-12-26 18:13:49,766 TRACE: Context during resolution of stage train@volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'shrink', 'verbose': 2, 'data_parms': 'H', 'item': 'volume'} 2023-12-26 18:13:49,770 TRACE: Context during resolution of stage postprocess@surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'shrink', 'verbose': 2, 'data_parms': 'H', 'item': 
'surface'} 2023-12-26 18:13:49,771 DEBUG: Lockfile 'pipelines\shrink\dvc.lock' needs to be updated. 2023-12-26 18:13:49,771 TRACE: No lock entry found for 'pipelines\shrink\dvc.yaml:postprocess@surface' 2023-12-26 18:13:49,774 TRACE: Context during resolution of stage postprocess@volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'shrink', 'verbose': 2, 'data_parms': 'H', 'item': 'volume'} 2023-12-26 18:13:49,775 TRACE: No lock entry found for 'pipelines\shrink\dvc.yaml:postprocess@volume' 2023-12-26 18:13:49,780 TRACE: Context during resolution of stage evaluate: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'shrink', 'verbose': 2, 'data_parms': 'H'} 2023-12-26 18:13:49,787 TRACE: 44.44 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\shrink 2023-12-26 18:13:49,788 TRACE: 7.10 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\shrink\models 2023-12-26 18:13:49,807 TRACE: pipelines\student\params.yaml does not exist, it won't be used in parametrization 2023-12-26 18:13:49,814 TRACE: Context during resolution of stage train@surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'student', 'teacher_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'teacher': 'ensemble', 'item': 'surface'} 2023-12-26 18:13:49,817 TRACE: Context during resolution of stage train@volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'student', 'teacher_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'teacher': 'ensemble', 'item': 'volume'} 2023-12-26 18:13:49,822 TRACE: Context during resolution of stage postprocess@surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'student', 'teacher_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'teacher': 'ensemble', 'item': 'surface'} 2023-12-26 18:13:49,826 TRACE: Context during resolution of stage postprocess@volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'student', 'teacher_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'teacher': 'ensemble', 'item': 'volume'} 2023-12-26 18:13:49,831 TRACE: Context during resolution of stage evaluate: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'student', 'teacher_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'teacher': 'ensemble'} 2023-12-26 18:13:49,841 TRACE: 52.55 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\student 2023-12-26 18:13:49,842 TRACE: 20.80 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\student\models 2023-12-26 18:13:49,863 TRACE: pipelines\twod\params.yaml does not exist, it won't be used in parametrization 2023-12-26 18:13:49,867 TRACE: Context during resolution of stage train@surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'twod', 'verbose': 2, 'data_parms': 'H', 'item': 'surface'} 2023-12-26 18:13:49,871 TRACE: Context during resolution of stage train@volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'twod', 'verbose': 2, 'data_parms': 'H', 'item': 'volume'} 2023-12-26 18:13:49,877 TRACE: Context during resolution of stage postprocess@surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'twod', 'verbose': 2, 'data_parms': 'H', 'item': 'surface'} 2023-12-26 18:13:49,880 TRACE: Context during resolution of stage postprocess@volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'twod', 'verbose': 2, 'data_parms': 'H', 'item': 'volume'} 2023-12-26 18:13:49,884 TRACE: Context during resolution of stage evaluate: 
{'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'twod', 'verbose': 2, 'data_parms': 'H'} 2023-12-26 18:13:49,892 TRACE: 48.48 ms in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\twod 2023-12-26 18:13:49,892 TRACE: 18.50 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\pipelines\twod\models 2023-12-26 18:13:49,893 TRACE: 22.40 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\scripts 2023-12-26 18:13:49,894 TRACE: 16.00 mks in collecting stages from C:\Users\starrgw1\code\almds_prototype\templates 2023-12-26 18:13:50,009 TRACE: pipelines\ensemble\params.yaml does not exist, it won't be used in parametrization 2023-12-26 18:13:50,039 TRACE: Context during resolution of stage train@0-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 0, 'dettype': 'surface'}} 2023-12-26 18:13:50,044 TRACE: Context during resolution of stage train@0-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 0, 'dettype': 'volume'}} 2023-12-26 18:13:50,048 TRACE: Context during resolution of stage train@1-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 1, 'dettype': 'surface'}} 2023-12-26 18:13:50,051 TRACE: Context during resolution of stage train@1-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 1, 'dettype': 'volume'}} 2023-12-26 18:13:50,055 TRACE: Context during resolution of stage train@2-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 2, 'dettype': 'surface'}} 2023-12-26 18:13:50,059 TRACE: Context during resolution of stage train@2-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 2, 'dettype': 'volume'}} 2023-12-26 18:13:50,063 TRACE: Context during resolution of stage train@3-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 3, 'dettype': 'surface'}} 2023-12-26 18:13:50,067 TRACE: Context during resolution of stage train@3-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 3, 'dettype': 'volume'}} 2023-12-26 18:13:50,071 TRACE: Context during resolution of stage train@4-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 4, 'dettype': 'surface'}} 2023-12-26 18:13:50,074 TRACE: Context during resolution of stage train@4-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 4, 'dettype': 'volume'}} 2023-12-26 18:13:50,078 TRACE: Context during resolution of stage train@5-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 5, 'dettype': 'surface'}} 2023-12-26 18:13:50,082 TRACE: Context during resolution of stage train@5-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 5, 'dettype': 'volume'}} 2023-12-26 18:13:50,085 TRACE: Context during resolution of stage train@6-surface: {'python_cmd': 'python submit_job.py -m', 
'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 6, 'dettype': 'surface'}} 2023-12-26 18:13:50,089 TRACE: Context during resolution of stage train@6-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 6, 'dettype': 'volume'}} 2023-12-26 18:13:50,093 TRACE: Context during resolution of stage train@7-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 7, 'dettype': 'surface'}} 2023-12-26 18:13:50,097 TRACE: Context during resolution of stage train@7-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 7, 'dettype': 'volume'}} 2023-12-26 18:13:50,101 TRACE: Context during resolution of stage train@8-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 8, 'dettype': 'surface'}} 2023-12-26 18:13:50,104 TRACE: Context during resolution of stage train@8-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 8, 'dettype': 'volume'}} 2023-12-26 18:13:50,108 TRACE: Context during resolution of stage train@9-surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 9, 'dettype': 'surface'}} 2023-12-26 18:13:50,112 TRACE: Context during resolution of stage train@9-volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': {'num': 9, 'dettype': 'volume'}} 2023-12-26 18:13:50,117 TRACE: Context during resolution of stage evaluate_individual@0: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 0} 2023-12-26 18:13:50,121 TRACE: Context during resolution of stage evaluate_individual@1: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 1} 2023-12-26 18:13:50,124 TRACE: Context during resolution of stage evaluate_individual@2: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 2} 2023-12-26 18:13:50,128 TRACE: Context during resolution of stage evaluate_individual@3: 2023-12-26 18:13:50,144 TRACE: Context during resolution of stage evaluate_individual@8: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 8} 2023-12-26 18:13:50,147 TRACE: Context during resolution of stage evaluate_individual@9: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 9} 2023-12-26 18:13:50,153 TRACE: Context during resolution of stage postprocess@surface: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 'surface'} 2023-12-26 18:13:50,160 TRACE: Context during resolution of stage postprocess@volume: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H', 'item': 'volume'} 2023-12-26 18:13:50,167 TRACE: Context during resolution of stage evaluate: {'python_cmd': 'python submit_job.py -m', 'pipeline_name': 'ensemble', 'verbose': 2, 'data_parms': 'H'} Collecting |0.00 [00:00, ?entry/s] 2023-12-26 18:13:50,510 DEBUG: failed to load ('54', 'd96d9a81e71aa978fe780b2b3133dc.dir') from storage local 
(\\nstd-delores\DeepMine\ALMDS\remote\files\md5) - [Errno 2] No such file or directory: '\\\\nstd-delores\\DeepMine\\ALMDS\\remote\\files\\md5\\54\\d96d9a81e71aa978fe780b2b3133dc.dir' Traceback (most recent call last): File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\index.py", line 552, in _load_from_storage _load_from_object_storage(trie, entry, storage) File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\index.py", line 488, in _load_from_object_storage obj = Tree.load(storage.odb, root_entry.hash_info, hash_name=storage.odb.hash_name) File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\hashfile\tree.py", line 193, in load with obj.fs.open(obj.path, "r") as fobj: File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_objects\fs\base.py", line 228, in open return self.fs.open(path, mode=mode, **kwargs) File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_objects\fs\local.py", line 138, in open return open(path, mode=mode, encoding=encoding) FileNotFoundError: [Errno 2] No such file or directory: '\\\\nstd-delores\\DeepMine\\ALMDS\\remote\\files\\md5\\54\\d96d9a81e71aa978fe780b2b3133dc.dir' 2023-12-26 18:13:50,518 DEBUG: failed to load ('54', 'd96d9a81e71aa978fe780b2b3133dc.dir') from storage local (C:\Users\starrgw1\code\almds_prototype\.dvc\cache\files\md5) - [Errno 2] No such file or directory: 'C:\\Users\\starrgw1\\code\\almds_prototype\\.dvc\\cache\\files\\md5\\54\\d96d9a81e71aa978fe780b2b3133dc.dir' Traceback (most recent call last): File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\index.py", line 552, in _load_from_storage _load_from_object_storage(trie, entry, storage) File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\index.py", line 488, in _load_from_object_storage obj = Tree.load(storage.odb, root_entry.hash_info, hash_name=storage.odb.hash_name) File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\hashfile\tree.py", line 193, in load with obj.fs.open(obj.path, "r") as fobj: File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_objects\fs\base.py", line 228, in open return self.fs.open(path, mode=mode, **kwargs) File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_objects\fs\local.py", line 138, in open return open(path, mode=mode, encoding=encoding) FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\starrgw1\\code\\almds_prototype\\.dvc\\cache\\files\\md5\\54\\d96d9a81e71aa978fe780b2b3133dc.dir' Fetching 2023-12-26 18:13:50,530 ERROR: unexpected error - failed to load directory ('54', 'd96d9a81e71aa978fe780b2b3133dc.dir'): [Errno 2] No such file or directory: 'C:\\Users\\starrgw1\\code\\almds_prototype\\.dvc\\cache\\files\\md5\\54\\d96d9a81e71aa978fe780b2b3133dc.dir' Traceback (most recent call last): File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\index.py", line 552, in _load_from_storage _load_from_object_storage(trie, entry, storage) File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\index.py", line 488, in _load_from_object_storage obj = Tree.load(storage.odb, root_entry.hash_info, hash_name=storage.odb.hash_name) File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\hashfile\tree.py", line 193, in load with obj.fs.open(obj.path, "r") as fobj: File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_objects\fs\base.py", line 228, in open return self.fs.open(path,
mode=mode, **kwargs) File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_objects\fs\local.py", line 138, in open return open(path, mode=mode, encoding=encoding) FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\starrgw1\\code\\almds_prototype\\.dvc\\cache\\files\\md5\\54\\d96d9a81e71aa978fe780b2b3133dc.dir' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc\cli\__init__.py", line 209, in main ret = cmd.do_run() File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc\cli\command.py", line 26, in do_run return self.run() File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc\commands\data_sync.py", line 35, in run stats = self.repo.pull( File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc\repo\__init__.py", line 61, in wrapper return f(repo, *args, **kwargs) fetch_transferred, fetch_failed = ifetch( File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\fetch.py", line 65, in fetch [ File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\fetch.py", line 65, in <listcomp> [ File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\index.py", line 691, in iteritems self._load(key, entry) File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\index.py", line 647, in _load self.onerror(entry, exc) File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\index.py", line 579, in _onerror raise exc File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\index.py", line 645, in _load _load_from_storage(self._trie, entry, self.storage_map[key]) File "C:\Users\starrgw1\Anaconda3\envs\almds\lib\site-packages\dvc_data\index\index.py", line 567, in _load_from_storage raise DataIndexDirError(f"failed to load directory {entry.key}") from last_exc dvc_data.index.index.DataIndexDirError: failed to load directory ('54', 'd96d9a81e71aa978fe780b2b3133dc.dir') 2023-12-26 18:13:50,748 DEBUG: link type reflink is not available ([Errno 129] no more link types left to try out) 2023-12-26 18:13:50,749 DEBUG: Removing 'C:\Users\starrgw1\code\.7YGkoZEkY7C6kcH7VcJ84d.tmp' 2023-12-26 18:13:50,752 DEBUG: Removing 'C:\Users\starrgw1\code\.7YGkoZEkY7C6kcH7VcJ84d.tmp' 2023-12-26 18:13:50,764 DEBUG: link type symlink is not available ([Errno 129] no more link types left to try out) 2023-12-26 18:13:50,764 DEBUG: Removing 'C:\Users\starrgw1\code\.7YGkoZEkY7C6kcH7VcJ84d.tmp' 2023-12-26 18:13:50,765 DEBUG: Removing 'C:\Users\starrgw1\code\almds_prototype\.dvc\cache\files\md5\.J8reJGhkmPrahTqp54n7cU.tmp' 2023-12-26 18:13:54,595 DEBUG: Version info for developers: DVC version: 3.26.2 (pip) ------------------------- Platform: Python 3.10.13 on Windows-10-10.0.19045-SP0 Subprojects: dvc_data = 2.18.2 dvc_objects = 1.2.0 dvc_render = 0.6.0 dvc_task = 0.3.0 scmrepo = 1.5.0 Supports: http (aiohttp = 3.9.1, aiohttp-retry = 2.8.3), https (aiohttp = 3.9.1, aiohttp-retry = 2.8.3) Config: Global: C:\Users\starrgw1\AppData\Local\iterative\dvc System: C:\ProgramData\iterative\dvc Cache types: hardlink Cache directory: NTFS on C:\ Caches: local Remotes: local Workspace directory: NTFS on C:\ Repo: dvc, git Repo.site_cache_dir: C:\ProgramData\iterative\dvc\Cache\repo\dff4f3f61607d77a9a2addbd05e75dbf Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help! 
2023-12-26 18:13:54,599 DEBUG: Analytics is enabled. 2023-12-26 18:13:54,744 DEBUG: Trying to spawn '['daemon', '-q', 'analytics', 'C:\\Users\\starrgw1\\AppData\\Local\\Temp\\tmpbnn6l9xn']' 2023-12-26 18:13:54,756 DEBUG: Spawned '['daemon', '-q', 'analytics', 'C:\\Users\\starrgw1\\AppData\\Local\\Temp\\tmpbnn6l9xn']' ``` ### Expected downloads my stuff, doesn't stop for errors ### Environment information ```dvc doctor DVC version: 3.26.2 (pip) ------------------------- Platform: Python 3.10.13 on Windows-10-10.0.19045-SP0 Subprojects: dvc_data = 2.18.2 dvc_objects = 1.2.0 dvc_render = 0.6.0 dvc_task = 0.3.0 scmrepo = 1.5.0 Supports: http (aiohttp = 3.9.1, aiohttp-retry = 2.8.3), https (aiohttp = 3.9.1, aiohttp-retry = 2.8.3) Config: Global: C:\Users\starrgw1\AppData\Local\iterative\dvc System: C:\ProgramData\iterative\dvc Cache types: hardlink Cache directory: NTFS on C:\ Caches: local Remotes: local Workspace directory: NTFS on C:\ Repo: dvc, git Repo.site_cache_dir: C:\ProgramData\iterative\dvc\Cache\repo\dff4f3f61607d77a9a2addbd05e75dbf ```
closed
2023-12-26T23:16:17Z
2024-03-25T10:47:35Z
https://github.com/iterative/dvc/issues/10202
[ "bug", "awaiting response", "research", "A: data-sync" ]
gregstarr
3
trevorstephens/gplearn
scikit-learn
2
hangs when saving to pickle
Hi, thanks for creating this wonderful tool. I was wondering if saving models using pickle is supported? When I issue the following commands: gp = SymbolicRegressor() trained_models = {} trained_models['gp'] = pickle.dumps(gp) my IDE hangs. This usually works when I am using other sklearn models. I am using the following: Python 2.7.10 |Anaconda 2.3.0 (64-bit)| (default, Oct 19 2015, 18:04:42) Type "copyright", "credits" or "license" for more information. IPython 4.0.0 -- An enhanced Interactive Python. Thank you.
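For reference, a minimal, hedged sketch of that workflow with the model fitted before serializing; this assumes gplearn's SymbolicRegressor follows the standard scikit-learn pickle protocol, and the toy data and file name are made up for illustration, not taken from the report:

```python
# Hypothetical sketch, not the reporter's environment: fit first, then pickle,
# the same way other scikit-learn estimators are usually saved.
import pickle
import numpy as np
from gplearn.genetic import SymbolicRegressor

X = np.random.uniform(-1, 1, size=(100, 2))   # toy features
y = X[:, 0] ** 2 - X[:, 1]                    # toy target

gp = SymbolicRegressor(population_size=500, generations=5, random_state=0)
gp.fit(X, y)

blob = pickle.dumps(gp)                       # serialize to bytes, as in the report

with open("gp_model.pkl", "wb") as fh:        # or serialize to a file on disk
    pickle.dump(gp, fh)

gp_restored = pickle.loads(blob)              # round-trip check
print(gp_restored.predict(X[:5]))
```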
closed
2015-12-02T04:31:26Z
2015-12-16T23:14:32Z
https://github.com/trevorstephens/gplearn/issues/2
[]
hasakura511
2
hzwer/ECCV2022-RIFE
computer-vision
75
optical flow labels
I generate optical flow labels using pytorch-liteflownet on 2X-sized images, but when training, the loss does not converge. My code: image_0_up = numpy.array(img_0.resize((width * 2, height * 2), Image.ANTIALIAS))[:, :, ::-1] image_t_up = numpy.array(img_t.resize((width * 2, height * 2), Image.ANTIALIAS))[:, :, ::-1] ... out_flow = netNetwork(tenPreprocessedFirst, tenPreprocessedSecond)
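Not something stated in the issue, but one thing worth double-checking when flow is estimated on 2X upsampled frames: the flow map lives on a 2X grid and its displacement values are also 2X larger, so if the labels supervise training at the original resolution they need to be resized back and their magnitudes halved. A hedged sketch (the helper name and dummy shapes are made up, not taken from RIFE or liteflownet):

```python
# Hypothetical helper: rescale a flow field estimated on 2x-upsampled frames
# back to the original resolution, halving the displacement magnitudes too.
import cv2
import numpy as np

def downscale_flow(flow_2x: np.ndarray, width: int, height: int) -> np.ndarray:
    """flow_2x has shape (2*height, 2*width, 2) holding (dx, dy) per pixel."""
    flow = cv2.resize(flow_2x, (width, height), interpolation=cv2.INTER_LINEAR)
    return flow * 0.5  # displacements were measured in 2x pixels

# Example with a dummy flow field.
h, w = 224, 224
fake_flow_2x = np.random.randn(2 * h, 2 * w, 2).astype(np.float32)
flow_label = downscale_flow(fake_flow_2x, w, h)
print(flow_label.shape)  # (224, 224, 2)
```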
closed
2020-12-21T02:52:57Z
2021-02-12T02:34:44Z
https://github.com/hzwer/ECCV2022-RIFE/issues/75
[]
zoeysgithub
8
strawberry-graphql/strawberry
django
3,784
Please use python >= 3.9 instead of ^3.9
Current pyproject.toml has: https://github.com/strawberry-graphql/strawberry/blob/951e56c5885cb9516ac53118b389d3b2cb47c52f/pyproject.toml#L30-L31 which actually means ">=3.9,<4.0", and the <4.0 upper bound adds nothing useful. But it forces any other project based on it to add a matching constraint to its own pyproject.toml for environment checking; otherwise the error below is raised: ``` Using version ^0.260.2 for strawberry-graphql Updating dependencies Resolving dependencies... (2.3s) The current project's supported Python range (>=3.10) is not compatible with some of the required packages Python requirement: - strawberry-graphql requires Python >=3.9,<4.0, so it will not be installable for Python >=4.0 Because no versions of strawberry-graphql match >0.260.2,<0.261.0 and strawberry-graphql[debug-server] (0.260.2) requires Python >=3.9,<4.0, strawberry-graphql is forbidden. So, because amdata-ic-server depends on strawberry-graphql[debug-server] (^0.260.2), version solving failed. * Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties For strawberry-graphql, a possible solution would be to set the `python` property to ">=3.10,<4.0" https://python-poetry.org/docs/dependency-specification/#python-restricted-dependencies, https://python-poetry.org/docs/dependency-specification/#using-environment-markers ``` I think this is bad behavior and should be changed.
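To make the two possible fixes concrete, here is a hedged pyproject.toml sketch; neither snippet is the actual strawberry or downstream configuration, they only illustrate the constraint change being requested and the workaround Poetry suggests:

```toml
# Option A (requested upstream): declare only a lower bound, so no <4.0 cap is implied.
[tool.poetry.dependencies]
python = ">=3.9"

# Option B (workaround in the consuming project): mirror the upper bound Poetry asks for.
# [tool.poetry.dependencies]
# python = ">=3.10,<4.0"
```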
closed
2025-02-17T06:33:45Z
2025-03-03T00:48:18Z
https://github.com/strawberry-graphql/strawberry/issues/3784
[]
PaleNeutron
4
deepfakes/faceswap
machine-learning
1,480
Newbie downloading: please help me see what problem caused it to abort!
I'm a newbie downloading this. Can you help me see what problem caused it to abort? ![Image](https://github.com/user-attachments/assets/fe419519-c5c6-4fea-8c5e-be6bc32fa0e7)
open
2025-01-29T16:56:57Z
2025-02-04T01:29:49Z
https://github.com/deepfakes/faceswap/issues/1480
[]
ZJ-LUCAS
1
deezer/spleeter
deep-learning
558
What does this mean?
I'm not a programmer or anything, so this will probably come across as an obvious question. I have gotten everything installed and I'm a femtometer away from getting spleeter to work. But what does this mean?? _FILES... List of input audio file path [required]_ I seem to be using the other commands properly; I just have no idea what to type in for that. Every time I try anything it says: _Error: Invalid value for 'FILES...': File 'audio_example.mp3' does not exist._ Anything helps. I'm so close to getting this, I think it hurts.
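For context, `FILES...` is simply the positional slot for the audio file(s) to separate, so the command needs the path of a file that actually exists ('audio_example.mp3' is only present if you downloaded spleeter's example audio). A hedged sketch of the same operation through spleeter's Python API, with placeholder paths rather than anything from this report:

```python
# Hypothetical sketch using spleeter's Python API; replace the placeholder
# paths with a real audio file and an output directory on your machine.
from spleeter.separator import Separator

separator = Separator("spleeter:2stems")   # vocals + accompaniment model
separator.separate_to_file(
    "/path/to/your_song.mp3",              # placeholder input file
    "/path/to/output_folder",              # placeholder output directory
)
```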
closed
2021-01-13T02:25:49Z
2021-01-29T13:28:20Z
https://github.com/deezer/spleeter/issues/558
[ "question" ]
Laikanoosh
1
sebastianruder/NLP-progress
machine-learning
4
split the file into files
I suspect editing these will become unwieldy quickly; up to you, just a thought.
closed
2018-06-23T17:39:32Z
2018-06-24T12:32:01Z
https://github.com/sebastianruder/NLP-progress/issues/4
[]
ybisk
4
gee-community/geemap
streamlit
609
geemap installation issue
I created a gee virtual env using conda create -n gee python, then activated it with conda activate gee, and then installed the geemap package from the conda-forge channel using conda install -c conda-forge geemap. The dependencies started downloading and installing, but near the end of the installation I got the issue below. Please help. m2w64-libwinpthread- | 31 KB | ############################################################################ | 100% sniffio-1.2.0 | 16 KB | ############################################################################ | 100% Preparing transaction: done Verifying transaction: done Executing transaction: / Enabling notebook extension jupyter-js-widgets/extension... - Validating: ok done ERROR conda.core.link:_execute(701): An error occurred while installing package 'conda-forge::qt-5.12.9-h5909a2a_4'. Rolling back transaction: done LinkError: post-link script failed for package conda-forge::qt-5.12.9-h5909a2a_4 location of failed script: C:\Users\Lenovo\Anaconda3\envs\gee\Scripts\.qt-post-link.bat ==> script messages <== <None> ==> script output <== stdout: 1 file(s) copied. stderr: 'chcp' is not recognized as an internal or external command, operable program or batch file. 'chcp' is not recognized as an internal or external command, operable program or batch file. 'chcp' is not recognized as an internal or external command, operable program or batch file. return code: 1 ()
closed
2021-08-06T07:42:25Z
2021-08-08T17:49:22Z
https://github.com/gee-community/geemap/issues/609
[]
dileep0
5
ets-labs/python-dependency-injector
flask
40
Research of Visio
File formats: .vsd, .vsx, .vsdx
closed
2015-03-26T23:12:39Z
2020-06-29T20:57:10Z
https://github.com/ets-labs/python-dependency-injector/issues/40
[ "research" ]
rmk135
1