Dataset schema:

| column | dtype | min | max |
| --- | --- | --- | --- |
| repo_name | string (length) | 9 | 75 |
| topic | string (30 classes) | | |
| issue_number | int64 | 1 | 203k |
| title | string (length) | 1 | 976 |
| body | string (length) | 0 | 254k |
| state | string (2 classes) | | |
| created_at | string (length) | 20 | 20 |
| updated_at | string (length) | 20 | 20 |
| url | string (length) | 38 | 105 |
| labels | list (length) | 0 | 9 |
| user_login | string (length) | 1 | 39 |
| comments_count | int64 | 0 | 452 |
biosustain/potion
sqlalchemy
158
Use of pattern_properties is not clear
I want to define an `Object` with `patternProperties`, basically to return a schema-free dict. Looking at the doc it seems you'd do this with `fields.Object(pattern_properties={'.*': MyValueSchema})`, but this just returns a dict with `".*"` as key. What's the correct way of defining an object with `patternProperties`?
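In JSON Schema terms, `patternProperties` maps *regexes* to value schemas, so `{'.*': ...}` should validate any key rather than create a literal `".*"` key. A minimal pure-Python sketch of those semantics (not potion's API, just what the generated schema is supposed to mean; the checker callables are stand-ins for value schemas):

```python
import re

def validate_pattern_properties(obj, pattern_properties):
    """Check every key of `obj` against each matching pattern's value checker."""
    for key, value in obj.items():
        for pattern, checker in pattern_properties.items():
            # JSON Schema pattern matching is unanchored, i.e. re.search
            if re.search(pattern, key):
                if not checker(value):
                    return False
    return True

# '.*' matches any key, so this accepts a schema-free dict of ints
any_key_ints = {".*": lambda v: isinstance(v, int)}
print(validate_pattern_properties({"a": 1, "xyz": 2}, any_key_ints))  # True
```

The key point: `.*` is a pattern over *keys*, not a key itself, which is why a dict literally keyed by `".*"` is the wrong output.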
closed
2018-12-07T09:42:23Z
2019-01-10T11:37:19Z
https://github.com/biosustain/potion/issues/158
[]
albertodonato
8
jonaswinkler/paperless-ng
django
1,573
[BUG] Progression bar stuck
**Describe the bug**

Whenever I try to upload a document from the web interface, the progression bar gets stuck at "Upload complete, waiting...". No status is reported beyond that, whether it's a success or an error, with the only exception being when the file type is not supported.

**To Reproduce**

1. Go to the web interface
2. Upload a supported file with "Upload new documents"
3. Observe the progression bar not going further than "Upload complete, waiting..."

**Expected behavior**

The progression bar reflecting the current status of the consumer/worker.

**Screenshots**

![image](https://user-images.githubusercontent.com/28761304/150690215-5ad1784a-dbc0-48a9-88e1-4b3f77f845fb.png)

**Webserver logs**

```
(...)
18:16:27 [Q] INFO Process-1:15 processing [CV.pdf]
18:16:27 [Q] INFO Enqueued 1
[2022-01-23 18:16:30,209] [INFO] [paperless.consumer] Consuming CV.pdf
[2022-01-23 18:16:58,588] [INFO] [paperless.consumer] Document 2001-09-03 CV consumption finished
18:16:58 [Q] INFO Process-1:15 stopped doing work
18:16:58 [Q] INFO recycled worker Process-1:15
18:16:58 [Q] INFO Process-1:17 ready for work at 1835
18:16:59 [Q] INFO Processed [CV.pdf]
(...)
```

**Relevant information**

- Paperless-ng 1.5.0 on Docker 20.10.3, on a Synology NAS (DS218+ – DSM 7.0.1)
- Tried on Firefox 96.0.2 and Brave 1.34.81
- Tried with and without my reverse proxy

<details>
<summary>docker-compose.yml</summary>

```yml
version: "3.8"
services:
  paperless-ng:
    image: jonaswinkler/paperless-ng
    container_name: Paperless
    depends_on:
      - redis
      - gotenberg
      - tika
    environment:
      USERMAP_UID: <redacted>
      USERMAP_GID: 100
      TZ: Europe/Brussels
      PAPERLESS_TIME_ZONE: Europe/Brussels
      PAPERLESS_REDIS: redis://redis:6379
      PAPERLESS_SECRET_KEY: <redacted>
      PAPERLESS_OCR_LANGUAGES: fra eng
      PAPERLESS_OCR_LANGUAGE: fra+eng
      PAPERLESS_OCR_ROTATE_PAGES_THRESHOLD: 9
      PAPERLESS_TIKA_ENABLED: 1
      PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
      PAPERLESS_TIKA_ENDPOINT: http://tika:9998
    volumes:
      - <redacted>
    ports:
      - 8100:8000
    healthcheck:
      test: ["CMD", "curl", "-f", "http://<redacted>:8100"]
      interval: 30s
      timeout: 10s
      retries: 5
    restart: unless-stopped
    networks:
      - paperless-ng
  redis:
    image: redis:4-alpine
    container_name: Paperless-Redis
    environment:
      - TZ=Europe/Brussels
    command: redis-server --bind redis --port 6379 --maxmemory 64M --maxmemory-policy volatile-lru
    restart: unless-stopped
    networks:
      - paperless-ng
  gotenberg:
    image: thecodingmachine/gotenberg:6
    container_name: Paperless-Gotenberg
    environment:
      TZ: Europe/Brussels
      DISABLE_GOOGLE_CHROME: 1
      MAXIMUM_WAIT_TIMEOUT: 300.0
      DEFAULT_WAIT_TIMEOUT: 300.0
    command: gotenberg --api-port=3000 --api-process-timeout=300 --log-level=debug
    restart: unless-stopped
    networks:
      - paperless-ng
  tika:
    image: apache/tika:1.27-full
    container_name: Paperless-Tika
    environment:
      TZ: Europe/Brussels
    restart: unless-stopped
    networks:
      - paperless-ng
networks:
  paperless-ng:
    name: paperless-ng
```

</details>
open
2022-01-23T17:36:31Z
2023-02-09T08:27:26Z
https://github.com/jonaswinkler/paperless-ng/issues/1573
[]
Keiishu
10
PaddlePaddle/models
computer-vision
4,962
Problems fine-tuning the video-tag attention-LSTM model on my own dataset...
My dataset has 8 classes, i.e. 8 labels, and I want to fine-tune starting from your open-source pretrained model. When I change `num_classes` under `MODEL` in `configs/attention_lstm-single.yaml` to 8, I get the following error: AssertionError: Parameter's shape does not match, the Program requires a parameter with the shape of ((4096, 8)), while the loaded parameter (namely [ output.w_0 ]) has a shape of ((4096, 3396)). If I don't use the pretrained model, training runs to completion. How can I solve this if I do want to use your pretrained model?
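A common workaround for this class of error (a hedged sketch of the general technique, not PaddlePaddle-specific code) is to load the pretrained weights while skipping any parameter whose shape no longer matches the new model, typically just the classifier head. Here parameters are represented by their shape tuples to keep the snippet self-contained; in a real framework you would compare `param.shape` of actual tensors:

```python
def filter_pretrained(pretrained, model):
    """Split pretrained params into (loadable, skipped) by shape match.

    pretrained / model: dict mapping parameter name -> shape tuple.
    """
    kept, skipped = {}, []
    for name, shape in pretrained.items():
        if model.get(name) == shape:
            kept[name] = shape
        else:
            skipped.append(name)
    return kept, skipped

# The 3396-way output head from the checkpoint vs. the new 8-way head
pretrained = {"conv.w": (64, 3, 7, 7), "output.w_0": (4096, 3396)}
model = {"conv.w": (64, 3, 7, 7), "output.w_0": (4096, 8)}
kept, skipped = filter_pretrained(pretrained, model)
# the mismatched classifier head is skipped; everything else loads,
# and the new 8-way head is trained from scratch
```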
closed
2020-11-19T11:51:54Z
2020-11-20T07:02:06Z
https://github.com/PaddlePaddle/models/issues/4962
[]
dl-lengyan
2
clovaai/donut
nlp
275
Idea: Freezing SwinEncoder and fine-tuning BARTdecoder only on custom data
My goal is to be able to finetune on my consumer-grade NVidia RTX GPU which has only 8GB of memory. The Donut architecture has a SwinEncoder followed by a BARTDecoder. I plan to freeze all the layers in SwinEncoder by setting `requires_grad` to False and fine-tune only the BARTDecoder layers. Has anyone tried this approach already? Was it successful for your case?
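The freezing step itself is just a loop over named parameters. A framework-agnostic sketch (in PyTorch this would iterate `model.named_parameters()`; the tiny `SimpleNamespace` stand-ins below exist only so the snippet is self-contained):

```python
from types import SimpleNamespace

def freeze_by_prefix(named_parameters, prefix):
    """Set requires_grad=False on every parameter whose name starts with prefix."""
    frozen = []
    for name, param in named_parameters:
        if name.startswith(prefix):
            param.requires_grad = False
            frozen.append(name)
    return frozen

# Stand-in parameters mimicking encoder.* / decoder.* names
params = [
    ("encoder.layers.0.weight", SimpleNamespace(requires_grad=True)),
    ("decoder.embed.weight", SimpleNamespace(requires_grad=True)),
]
frozen = freeze_by_prefix(params, "encoder.")
# only decoder parameters remain trainable (and should be passed to the optimizer)
```

Note that freezing saves gradient and optimizer-state memory for the frozen layers, but the frozen encoder still runs in the forward pass, so activation memory is only reduced if you also disable gradient tracking for those activations.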
open
2023-11-27T05:42:20Z
2024-02-07T03:18:24Z
https://github.com/clovaai/donut/issues/275
[]
jackkwok
4
openapi-generators/openapi-python-client
rest-api
626
Max version for dependencies restricts freedom for consuming projects
**Is your feature request related to a problem? Please describe.**

Recently httpx had a security vulnerability posted that required an update to httpx==0.23.0. openapi-python-client does not have a lot of dependencies, but httpx was one of them, and httpx was pinned to `<0.23.0`. The solution at the time was to bump the limit to `0.24.0`.

Some concerns with the 0.24.0 approach:

1. That only kicks the problem down the road. What if 0.23.0 also has a security vulnerability?
1. Projects were unable to update for ~10 days after the vulnerabilities were reported

I believe a better solution is to remove the max version for all dependencies. This allows clients the freedom to utilize the latest version, and puts less stress on the maintainers of this project to quickly respond to urgent PRs. The concern about untested versions is valid, but a test matrix can help with that.

While not a perfectly relevant example, the recommendation in the [python.org documentation](https://packaging.python.org/discussions/install-requires-vs-requirements/#install-requires) reflects my sentiment, notably the `gaining the benefit of dependency upgrades` part:

> It is not considered best practice to use install_requires to pin dependencies to specific versions, or to specify sub-dependencies (i.e. dependencies of your dependencies). This is overly-restrictive, and prevents the user from gaining the benefit of dependency upgrades.

**Describe the solution you'd like**

1. Remove the max version for dependencies in setup.py and pyproject.toml jinja files
1. Consider: Updating the test pipelines to test known good versions, and the latest version (to ensure a dependabot PR for a new version fails if it has regressions)
1. Consider: Adding verbiage to the readme / license specifying known good versions

**Describe alternatives you've considered**

1. Make the max version a full major version bump ahead. For example, with httpx: `<1.0.0`
   - This has the same problem as the proposed solution:
     - Future versions (e.g. 0.25.0) are permitted but are "untested" for compatibility
   - It also has the same problem as the current solution:
     - A future update is blocked by the library (e.g. 1.0.0)
1. Rely on consuming projects to fork the setup.py/pyproject.toml templates (using `--custom-template-path`) to remove the max version restriction.
   - This is what my project did.
   - I strongly prefer to NOT fork templates from openapi-python-client unless I have to. Forking puts the onus on my project team to analyze and merge any upstream changes to those templates in the future, just because I needed to change one line.

**Additional context**

Add any other context or screenshots about the feature request here.
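To make the pinning trade-off concrete, here is a small sketch using the `packaging` library (assumed available; it implements the PEP 440 specifier rules that pip uses), showing how a `<0.23.0` cap blocks the patched release while an open lower bound does not:

```python
from packaging.specifiers import SpecifierSet

capped = SpecifierSet(">=0.15,<0.23.0")   # current style: max version pinned
open_ended = SpecifierSet(">=0.15")       # proposed style: no upper bound

# The patched release is blocked by the cap but allowed by the open range.
print("0.23.0" in capped)      # False
print("0.23.0" in open_ended)  # True
```

The same mechanism shows why `<1.0.0` only delays the problem: `"1.0.0" in SpecifierSet("<1.0.0")` is likewise `False` the day a 1.0.0 security release ships.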
closed
2022-06-07T01:54:47Z
2022-11-12T18:45:26Z
https://github.com/openapi-generators/openapi-python-client/issues/626
[ "✨ enhancement" ]
m3brown
2
Sanster/IOPaint
pytorch
79
U is undefined
I'm getting a `u is undefined` error when I try to use the brush on an image.
closed
2022-10-05T16:28:24Z
2022-10-06T19:37:09Z
https://github.com/Sanster/IOPaint/issues/79
[]
cryptojoejoe
0
voxel51/fiftyone
data-science
5,288
[BUG] Please include all python dependencies for defaults
### Describe the problem

When I run compute_visualization with just the default settings it throws a dependency error. By default compute_visualization uses the umap library to do dimensionality reduction, yet that library is not installed along with fiftyone. Please either change the default method for dimensionality reduction or add the umap library to the default packages installed along with fiftyone.

### Code to reproduce issue

Call compute_visualization() without specifying a method=

### Willingness to contribute

The FiftyOne Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the FiftyOne codebase?

- [x] Yes. I can contribute a fix for this bug independently
- [ ] Yes. I would be willing to contribute a fix for this bug with guidance from the FiftyOne community
- [ ] No. I cannot contribute a bug fix at this time
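Short of bundling the dependency, libraries often soften this failure with a lazy import guard that produces an actionable message. A minimal sketch (a hypothetical helper, not FiftyOne's actual code):

```python
import importlib

def require(module, pip_name=None):
    """Import an optional dependency, failing with an actionable message."""
    try:
        return importlib.import_module(module)
    except ImportError as exc:
        hint = pip_name or module
        raise ImportError(
            f"{module!r} is required for this feature; install it with "
            f"`pip install {hint}`"
        ) from exc

math_mod = require("math")  # a stdlib module, so this import succeeds
```

Calling something like `require("umap", "umap-learn")` in an environment without umap would then raise an ImportError naming the exact pip package, instead of a bare ModuleNotFoundError deep inside the visualization call.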
open
2024-12-17T19:35:02Z
2024-12-17T19:35:03Z
https://github.com/voxel51/fiftyone/issues/5288
[ "bug" ]
thesteve0
0
brightmart/text_classification
tensorflow
1
pre-trained word embedding
where to find zhihu-word2vec-title-desc.bin-100?
closed
2017-07-12T14:16:40Z
2017-08-03T03:07:39Z
https://github.com/brightmart/text_classification/issues/1
[]
lc222
1
mlfoundations/open_clip
computer-vision
266
Clarify readme
Readme is too big and mentions many unrelated things. Let's try to make things simple and move additional information to other MD files.

Readme should contain only:

* Best results and models
* How to install
* How to do inference
* How to do training

Details can be moved to dedicated files, e.g. evaluating.md, wide-ft.md, hf_model.md, etc.
closed
2022-11-28T16:23:24Z
2023-02-03T23:56:23Z
https://github.com/mlfoundations/open_clip/issues/266
[]
rom1504
5
QuivrHQ/quivr
api
2,709
Demo Linear
Demo Linear
closed
2024-06-24T09:44:19Z
2024-06-24T10:01:14Z
https://github.com/QuivrHQ/quivr/issues/2709
[]
StanGirard
1
K3D-tools/K3D-jupyter
jupyter
50
Improve handling of context loss
While creating lots of test plots in a notebook, I noticed that the initial plots behave strangely. The plot contents and grid are gone, but the numbers and letters on the xyz axes are still there and interaction works. I'm not sure exactly what's needed to reproduce yet, but I'm guessing it's related to WebGL context loss.
closed
2017-05-31T13:58:20Z
2020-05-05T01:13:20Z
https://github.com/K3D-tools/K3D-jupyter/issues/50
[]
martinal
5
lanpa/tensorboardX
numpy
289
The graph has many unused nodes with Long type?

```
(%0 : Float(1, 3, 384, 384) %1 : Float(64, 3, 7, 7) %2 : Float(64) %3 : Float(64) %4 : Float(64) %5 : Float(64) %6 : Long() %7 : Float(64, 64, 1, 1) %8 : Float(64) %9 : Float(64) %10 : Float(64) %11 : Float(64) %12 : Long() ......)
%401 : Float(1, 64, 96, 96) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%400, %7), scope: PoseResNet/Sequential[layer1]/Bottleneck[0]/Conv2d[conv1]
```
closed
2018-11-27T03:32:34Z
2018-12-26T18:17:07Z
https://github.com/lanpa/tensorboardX/issues/289
[]
li-haoran
0
donnemartin/system-design-primer
python
463
Question - Is there a repo as comprehensive as this for Algorithms & Data Structures?
I have searched but have not come across anything as comprehensive as this.
open
2020-08-28T14:41:40Z
2020-10-30T01:13:03Z
https://github.com/donnemartin/system-design-primer/issues/463
[ "needs-review" ]
avinashkanaujiya
3
huggingface/datasets
pytorch
6,827
Loading a remote dataset fails in the last release (v2.19.0)
### Describe the bug

While loading a dataset with multiple splits I get an error saying `Couldn't find file at <URL>`. I am loading the dataset like so, nothing out of the ordinary. This dataset needs a token to access it.

```python
token = "hf_myhftoken-sdhbdsjgkhbd"
load_dataset("speechcolab/gigaspeech", "test", cache_dir="gigaspeech/test", token=token)
```

I get the following error

![Screenshot 2024-04-19 at 11 03 07 PM](https://github.com/huggingface/datasets/assets/35369637/8dce757f-08ff-45dd-85b5-890fced7c5bc)

Now you can see that the URL that it is trying to reach has the JSON object of the dataset split appended to the base URL. I think this may be due to a newly introduced issue. I did not have this issue with the previous version of datasets. Everything was fine for me yesterday and after the release 12 hours ago, this seems to have broken. Also, the dataset in question runs custom code, and I checked that there have been no commits to the dataset on Hugging Face in 6 months.

### Steps to reproduce the bug

Since this happened with one particular dataset for me, I am listing steps to use that dataset.

1. Open https://huggingface.co/datasets/speechcolab/gigaspeech and fill the form to get access.
2. Create a token on your huggingface account with read access.
3. Run the following line, substituting `<your_token_here>` with your token.

```python
load_dataset("speechcolab/gigaspeech", "test", cache_dir="gigaspeech/test", token="<your_token_here>")
```

### Expected behavior

Be able to load the dataset in question.

### Environment info

datasets == 2.19.0
python == 3.10
kernel == Linux 6.1.58+
open
2024-04-19T21:11:58Z
2024-04-19T21:13:42Z
https://github.com/huggingface/datasets/issues/6827
[]
zrthxn
0
jina-ai/serve
machine-learning
5,606
feat: provide post request level prefetch argument
**Describe the feature**

<!-- A clear and concise description of what the feature is. -->

The `prefetch` argument is currently used in the Gateway to control the number of in-flight requests. This mechanism acts as preliminary back pressure on the executors: no more than `prefetch` requests are outstanding before the next request is taken from the request generator/iterator. This single configuration value at the Gateway level is too restrictive, because the load-handling capability varies for different kinds of requests or workloads. For example, an `index` request can have a large payload which requires more time for IO and other CPU-intensive tasks. By comparison, a `search` request that looks up by id is very lightweight, and the operation is mostly offloaded to the underlying database. In this scenario, the executor **could** handle more concurrent `search` requests than indexing requests. Knowing the parameters or behavior of the system, different `prefetch` values can be used to slow down or speed up each operation independently.

**Your proposal**

<!-- copy past your code/pull request link -->

An optional `prefetch` argument is implemented at the client `post` method which can override the Gateway `prefetch` argument. The client then initiates requests satisfying the `prefetch` criteria, which prevents overloading the Gateway/Executor.
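The prefetch mechanism can be pictured as a bounded pool of in-flight requests. A minimal asyncio sketch (not Jina's implementation; `handler` and the per-call `prefetch` argument are illustrative assumptions):

```python
import asyncio

async def post(requests, handler, prefetch=2):
    """Send requests, never allowing more than `prefetch` in flight at once."""
    sem = asyncio.Semaphore(prefetch)
    in_flight = 0
    peak = 0

    async def send(req):
        nonlocal in_flight, peak
        async with sem:  # blocks while `prefetch` requests are outstanding
            in_flight += 1
            peak = max(peak, in_flight)
            try:
                return await handler(req)
            finally:
                in_flight -= 1

    results = await asyncio.gather(*(send(r) for r in requests))
    return results, peak

async def slow_echo(req):
    await asyncio.sleep(0.01)  # stand-in for network/executor latency
    return req

results, peak = asyncio.run(post(list(range(10)), slow_echo, prefetch=3))
```

A per-call `prefetch` (low for heavy `index` calls, higher for cheap `search` calls) then amounts to choosing a different semaphore size per `post` invocation.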
closed
2023-01-17T13:57:20Z
2023-01-19T10:49:13Z
https://github.com/jina-ai/serve/issues/5606
[ "epic/gRPCTransport" ]
girishc13
0
litestar-org/litestar
api
3,793
Enhancement: drop Python 3.8, which is EOL as of the 3.13 release
### Summary

Python 3.8 is EOL: https://devguide.python.org/versions/

Dropping 3.8 will allow us to:

- [ ] Drop python3.8 examples
- [ ] Drop `List` / `Optional` / etc usage
- [ ] Require one less version in CI

What do others think? If you agree, I would like to send a PR :)

### Basic Example

_No response_

### Drawbacks and Impact

_No response_

### Unresolved questions

_No response_
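The `List` / `Optional` cleanup means moving from `typing` aliases to the built-in generics (3.9+) and `|` unions (3.10+) that become usable once 3.8 support is gone. A small before/after sketch (the function names are made up):

```python
from __future__ import annotations  # lets the new syntax parse on older runtimes

from typing import List, Optional  # old-style imports, removable after the drop

# Before: 3.8-compatible spelling
def first_old(items: List[int]) -> Optional[int]:
    return items[0] if items else None

# After: 3.9+/3.10+ spelling, no typing imports needed
def first_new(items: list[int]) -> int | None:
    return items[0] if items else None
```

Tools like pyupgrade can mechanize exactly this rewrite across a codebase once the minimum version is raised.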
closed
2024-10-14T10:33:05Z
2025-03-20T15:54:58Z
https://github.com/litestar-org/litestar/issues/3793
[ "Enhancement", "Compatibility", "Package" ]
sobolevn
2
shibing624/text2vec
nlp
106
Hello, some questions encountered when fine-tuning with training_sup_text_matching_model_en.py
### Describe the Question

I am trying to use my own dataset. The labels are only 0 and 1, since I plan to treat this as a binary classification problem. How should the final metric be computed? Should scores above 0.5 count as 1 and scores below 0.5 as 0? I ask because I could not find any code related to metrics such as Accuracy or precision in your library.
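One common convention for turning similarity scores into binary-classification metrics is exactly the thresholding described above. A minimal sketch (the 0.5 threshold is an assumption; in practice it is often tuned on a validation split):

```python
def binarize(scores, threshold=0.5):
    """Map similarity scores to 0/1 predictions at the given threshold."""
    return [1 if s >= threshold else 0 for s in scores]

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

scores = [0.91, 0.42, 0.77, 0.08]
labels = [1, 0, 1, 1]
preds = binarize(scores)
# preds == [1, 0, 1, 0]; accuracy(preds, labels) == 0.75
```

Precision and recall follow the same pattern once predictions are binarized; `sklearn.metrics` provides ready-made implementations if scikit-learn is available.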
closed
2023-07-26T09:13:02Z
2023-08-17T13:15:37Z
https://github.com/shibing624/text2vec/issues/106
[ "question" ]
Fino2020
6
qubvel-org/segmentation_models.pytorch
computer-vision
247
Low GPU utilization using deeplabv3
```
| 1 TITAN RTX Off | 00000000:65:00.0 Off | N/A |
| 55% 76C P2 116W / 280W | 9107MiB / 24212MiB | 43% Default |
```

On average the utilization is around 30%, and the iterations are very slow. If using other models, say unet, the utilization is over 90%. In both deeplabv3 and unet, the backbone is efficientnet-b4. Seems this problem is specific to deeplabv3. Any suggestion on how to debug it? Thanks.

I tried the original code (https://github.com/VainF/DeepLabV3Plus-Pytorch); the utilization is over 90%.
closed
2020-08-25T07:14:10Z
2022-02-20T01:54:02Z
https://github.com/qubvel-org/segmentation_models.pytorch/issues/247
[ "Stale" ]
askerlee
4
WZMIAOMIAO/deep-learning-for-image-processing
deep-learning
709
FileNotFoundError even though the files exist
**System information**

* Have I written custom code: No
* OS Platform: Windows 10
* Python version: 3.8
* Deep learning framework and version: PyTorch 1.7.1
* Use GPU or not: use GPU
* The network you trained: Faster R-CNN

**Describe the current behavior**

I am using a custom Pascal VOC dataset, but my files are named with strings rather than integers. When I use files with string names I get a FileNotFoundError, but when I change the names to integers in JPEGImages and in the 'filename' field of the annotation files, the code runs smoothly. What should I change in my program, please?
open
2022-12-26T11:45:29Z
2022-12-26T11:45:29Z
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/709
[]
Shuvo001
0
sunscrapers/djoser
rest-api
263
string index out of range
While using auth/users/create/ I am getting `string index out of range` at the time of sending the confirmation and activation email. The email backend and settings are all right, and I have not overridden any function.
closed
2018-02-01T13:39:04Z
2019-01-22T12:27:03Z
https://github.com/sunscrapers/djoser/issues/263
[]
sbishnu019
3
allenai/allennlp
nlp
5,302
allennlp.training.callbacks.confidence_checks.ConfidenceCheckError: The NormalizationBiasVerification check failed.
Hi,

1) What should I check for this error, and how do I set it to False in the config file?

```
allennlp.training.callbacks.confidence_checks.ConfidenceCheckError: The NormalizationBiasVerification check failed. See logs for more details. You can disable these checks by setting the trainer parameter `run_confidence_checks` to `False`.
```

I rewrote the `config.jsonnet` as below:

```
"trainer": {...
    "validation_metric": "+answer_f1",
    "enable_default_callbacks": false,
    "use_amp": true,
    ...
    "run_confidence_checks": false
}
```

2) and I got this error message:

```
>> allennlp.common.checks.ConfigurationError: Serialization directory (checkpoint) already exists and is not empty. Specify --recover to recover from an existing output folder.
```

Would you give me some advice to solve 1) and 2), and where could I find the related documentation?
closed
2021-07-07T07:09:58Z
2021-07-08T09:28:39Z
https://github.com/allenai/allennlp/issues/5302
[ "question" ]
HenryPaik1
2
plotly/dash-table
plotly
213
Feature Request: Export Data Button
Similar to how you can save plotly graphs to PNGs from a button, it would be useful to be able to provide a button that saves the data in a table as a CSV or Excel file. It would be nice if it took into account filtering done via either the frontend or the backend. I don't have any particular opinion on how this should be implemented, e.g. purely JavaScript client side (therefore more magical), or via a callback in which the Dash developer must provide an actual file (therefore more customizable).
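For the callback flavor, the server-side piece is just serializing the (possibly filtered) rows. A minimal stdlib sketch, independent of dash-table's actual API (`columns` and `rows` as list-of-dicts are assumptions about the shape of the table data):

```python
import csv
import io

def rows_to_csv(columns, rows):
    """Serialize table rows (list of dicts) to a CSV string for download."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

csv_text = rows_to_csv(
    ["name", "score"],
    [{"name": "a", "score": 1}, {"name": "b", "score": 2}],
)
```

In a Dash callback this string would be handed to the download mechanism; if the rows come from the filtered view rather than the full dataset, the export respects frontend filtering for free.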
closed
2018-11-03T17:28:59Z
2019-09-12T15:47:59Z
https://github.com/plotly/dash-table/issues/213
[ "dash-type-enhancement" ]
notatallshaw
2
docarray/docarray
fastapi
1,875
Loading audio tensors fails: ValueError: all input arrays must have the same shape
### Initial Checks - [X] I have read and followed [the docs](https://docs.docarray.org/) and still think this is a bug ### Description I have created subclips of a video in .mp4 using ffmpeg (through moviepy): ```py # moviepy.video.io.ffmpeg_tools.ffmpeg_extract_subclip def ffmpeg_extract_subclip(filename, t1, t2, targetname=None): """ Makes a new video file playing video file ``filename`` between the times ``t1`` and ``t2``. """ name, ext = os.path.splitext(filename) if not targetname: T1, T2 = [int(1000*t) for t in [t1, t2]] targetname = "%sSUB%d_%d.%s" % (name, T1, T2, ext) cmd = [get_setting("FFMPEG_BINARY"),"-y", "-ss", "%0.2f"%t1, "-i", filename, "-t", "%0.2f"%(t2-t1), "-map", "0", "-vcodec", "copy", "-acodec", "copy", targetname] subprocess_call(cmd) ``` Output: <img width="500" alt="image" src="https://github.com/docarray/docarray/assets/20426965/cf828df3-82e7-4eea-9e67-81838a1e8ecc"> The subclip path is passed to `VideoUrl`: ```py subclip = VideoUrl("<subclip_path>") ``` Trying to load the tensors fails: ```py tensors = subclip.load() ``` ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[21], line 1 ----> 1 tensors = subclip.load() File ~/Projects/chrisammon3000/experiments/docarray/docarray-test/.venv/lib/python3.11/site-packages/docarray/typing/url/video_url.py:96, in VideoUrl.load(self, **kwargs) 33 """ 34 Load the data from the url into a `NamedTuple` of 35 [`VideoNdArray`][docarray.typing.VideoNdArray], (...) 93 [`NdArray`][docarray.typing.NdArray] of the key frame indices. 
94 """ 95 buffer = self.load_bytes(**kwargs) ---> 96 return buffer.load() File ~/Projects/chrisammon3000/experiments/docarray/docarray-test/.venv/lib/python3.11/site-packages/docarray/typing/bytes/video_bytes.py:86, in VideoBytes.load(self, **kwargs) 84 audio = parse_obj_as(AudioNdArray, np.array(audio_frames)) 85 else: ---> 86 audio = parse_obj_as(AudioNdArray, np.stack(audio_frames)) 88 video = parse_obj_as(VideoNdArray, np.stack(video_frames)) 89 indices = parse_obj_as(NdArray, keyframe_indices) File ~/Projects/chrisammon3000/experiments/docarray/docarray-test/.venv/lib/python3.11/site-packages/numpy/core/shape_base.py:449, in stack(arrays, axis, out, dtype, casting) 447 shapes = {arr.shape for arr in arrays} 448 if len(shapes) != 1: --> 449 raise ValueError('all input arrays must have the same shape') 451 result_ndim = arrays[0].ndim + 1 452 axis = normalize_axis_index(axis, result_ndim) ValueError: all input arrays must have the same shape ``` Stepping through the code shows that the first audio frame has a sample rate of 16: <img width="1000" alt="image" src="https://github.com/docarray/docarray/assets/20426965/26bad5b4-9077-4dd7-be5c-1bc2c6cb653a"> The second and all subsequent frames have 1024 samples: <img width="1246" alt="image" src="https://github.com/docarray/docarray/assets/20426965/83e963b4-bafb-4a28-8fdc-6702f49d7106"> So this results in arrays with different shapes for the audio. What Ive tried: - I have tried adjusting the options for ffmpeg like converting to AAC ad specifying audio channels and it does fix the problem, however it takes about 10 times longer to create the subclips. - Using a preprocessing step to pad the arrays before reading them into DocArray would require reading and writing each subclip again If there is a way to handle the shape mismatch inside DocArray that would be great because it would let me create the subclips and model them as quickly as possible. 
It would need to be added to this block: https://github.com/docarray/docarray/blob/f71a5e6af58b77fdeb15ba27abd0b7d40b84fd09/docarray/typing/bytes/video_bytes.py#L83-L86 ### Example Code ```Python import os from pathlib import Path import numpy as np from docarray.typing import VideoUrl from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip def generate_subclips(parent_path, video_id, video_uri, video_duration, duration=60): subclips_path = Path(parent_path) / "subclips" subclips_path.mkdir(exist_ok=True) start_times = np.arange(0, video_duration, duration) end_times = np.append(start_times[1:], video_duration) clip_times = list(zip(start_times, end_times)) for start_time, end_time in clip_times: # filename should have start_end seconds as part of the name output_file_path = subclips_path / f"{video_id}__{start_time}_{end_time}.{video_uri.suffix[1:]}" ffmpeg_extract_subclip(video_uri, start_time, end_time, targetname=output_file_path) # Example usage # parent_path = 'path/to/parent/directory' # video_id = 'example_video_id' # video_uri = Path('path/to/video.mp4') # video_duration = 1200 # for example, 20 minutes # generate_subclips(parent_path, video_id, video_uri, video_duration, duration=60) def sort_key(path): """Sorts by the start time in the subclip file name For example: Fu7YkoRWKB8_Y__0_60.mp4 will sort by `0` """ # Extract the integer after "__" from the filename return int(path.stem.split('__')[1].split('_')[0]) subclips_dir = Path(os.getcwd()).parent / "subclips" # create subclips generate_subclips(subclips_dir, <video_id>, <video_uri>, <video_duration>, duration=60) subclips_paths = sorted(subclips_dir.iterdir(), key=sort_key) video_urls = [VideoUrl(f"{str(subclip)}") for subclip in subclips_paths] # load tensors # the first subclip might work... 
subclip0 = VideoUrl(str(subclips_paths[0])) subclip0_tensors = subclip.load() # but the second and other subclips throw the shape mismatch error subclip1 = VideoUrl(str(subclips_paths[1])) subclip1_tensors = subclip.load() ``` ### Python, DocArray & OS Version ```Text 0.40.0 ``` ### Affected Components - [ ] [Vector Database / Index](https://docs.docarray.org/user_guide/storing/docindex/) - [X] [Representing](https://docs.docarray.org/user_guide/representing/first_step) - [ ] [Sending](https://docs.docarray.org/user_guide/sending/first_step/) - [ ] [storing](https://docs.docarray.org/user_guide/storing/first_step/) - [ ] [multi modal data type](https://docs.docarray.org/data_types/first_steps/)
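As a workaround on the loading side, ragged audio frames can be zero-padded to a common length before stacking. A hedged sketch of the preprocessing idea in plain NumPy (not a patch to `VideoBytes.load`; it assumes 1-D mono frames):

```python
import numpy as np

def pad_and_stack(frames):
    """Zero-pad 1-D audio frames to the longest frame's length, then stack."""
    longest = max(f.shape[-1] for f in frames)
    padded = [np.pad(f, (0, longest - f.shape[-1])) for f in frames]
    return np.stack(padded)

# A short first frame (16 samples) followed by full 1024-sample frames,
# mirroring the shape mismatch in the traceback above.
frames = [np.ones(16), np.ones(1024), np.ones(1024)]
audio = pad_and_stack(frames)
# audio.shape == (3, 1024); np.stack(frames) alone would raise ValueError
```

Zero padding is silence, so for most downstream audio processing this is harmless; alternatively the short leading frame could simply be dropped before stacking.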
open
2024-03-06T20:13:45Z
2024-04-04T12:31:11Z
https://github.com/docarray/docarray/issues/1875
[]
chrisammon3000
4
mars-project/mars
scikit-learn
2,717
Supports argument `inclusive` in `date_range` function
Pandas 1.4 now uses `inclusive` instead of `closed` in `pd.date_range`. Mars needs to adapt to that change.
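A compatibility shim for this typically renames the keyword based on the installed pandas version. A minimal pure-Python sketch (the helper name and version-tuple interface are made up):

```python
def adapt_date_range_kwargs(kwargs, pandas_version):
    """Rename `closed` -> `inclusive` for pandas >= 1.4, and back for older."""
    out = dict(kwargs)
    new_api = tuple(pandas_version[:2]) >= (1, 4)
    if new_api and "closed" in out:
        out["inclusive"] = out.pop("closed")
    elif not new_api and "inclusive" in out:
        out["closed"] = out.pop("inclusive")
    return out

adapted = adapt_date_range_kwargs({"closed": "left"}, (1, 4, 0))
# adapted == {"inclusive": "left"}
```

One caveat for a real shim: `inclusive` also accepts `"both"` and `"neither"`, which the old `closed` keyword could not express, so only `"left"`/`"right"`/`None` round-trip cleanly.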
closed
2022-02-16T08:46:06Z
2022-02-16T22:26:05Z
https://github.com/mars-project/mars/issues/2717
[ "type: feature", "mod: dataframe" ]
wjsi
0
sinaptik-ai/pandas-ai
data-science
1,065
openai should support modifying base_url
### 🚀 The feature

Sometimes we need to use openai with a new base_url: for example, the original openai URL is blocked, or we need a customized openai-style API. So we should support modifying base_url.

### Motivation, pitch

Sometimes we need to use openai with a new base_url: for example, the original openai URL is blocked, or we need a customized openai-style API. So we should support modifying base_url.

### Alternatives

_No response_

### Additional context

_No response_
closed
2024-03-27T12:00:20Z
2024-03-27T17:45:24Z
https://github.com/sinaptik-ai/pandas-ai/issues/1065
[]
cFireworks
0
pallets/flask
flask
5,688
Error using sample code in favicon documentation
The code sample triggers the assertion `view_func is not None, "expected view func if endpoint is not provided."`, since `add_url_rule` is called with neither an `endpoint` nor a `view_func`.

Documentation in question: https://flask.palletsprojects.com/en/stable/patterns/favicon/

Sample code in question:

```python
app.add_url_rule('/favicon.ico', redirect_to=url_for('static', filename='favicon.ico'))
```
open
2025-03-02T06:04:51Z
2025-03-02T06:05:43Z
https://github.com/pallets/flask/issues/5688
[]
AluminumAngel
1
aws/aws-sdk-pandas
pandas
2,871
Unable to use copy_from_files to load into a redshift table w/ an identity column.
### Describe the bug

In reference to this [issue](https://github.com/aws/aws-sdk-pandas/issues/1110), it appears we are still unable to run copy_from_files when attempting to copy parquet data into a redshift table that has an identity column. It works with to_sql, but not copy_from_files.

### How to Reproduce

1. Upload parquet data into S3
2. Create a table with an identity column for said file
3. Run awswrangler.redshift.copy_from_files() to copy the file into the target table.

This will return the error:

```
redshift_connector.error.ProgrammingError: {'S': 'ERROR', 'C': '42601', 'M': 'NOT NULL column without DEFAULT must be included in column list', 'F': '../src/pg/src/backend/commands/commands_copy.c', 'L': '2836', 'R': 'DoTheCopy'}
```

### Expected behavior

Expected the copy query to succeed and the resulting table to contain the same data as the parquet file, with the identity column auto-incrementing.

### Your project

_No response_

### Screenshots

_No response_

### OS

Linux

### Python version

3.11.7

### AWS SDK for pandas version

3.8.0

### Additional context

_No response_
closed
2024-06-24T16:27:46Z
2024-07-22T08:01:31Z
https://github.com/aws/aws-sdk-pandas/issues/2871
[ "bug" ]
nlm4145
1
pydantic/FastUI
fastapi
320
computed_field properties not showing in tables
I have a model with some [computed_field](https://docs.pydantic.dev/2.0/usage/computed_fields/)s, and they do not show up automatically when I try to render them in a table, unless I explicitly define the `columns` of the table (and reference the field by name with a `DisplayLookup`). The problem/inconvenience with this is that now I have to specify every column with a DisplayLookup, just to include the one column that is a computed property. This makes my code less DRY and harder to maintain. It would be nice if the table logic would find these computed properties and include them when the columns/fields are discovered.
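The requested discovery logic can be sketched without any FastUI or pydantic specifics (a pure-Python stand-in, where an ordinary `@property` plays the role of a `@computed_field`; FastUI's real discovery works off the pydantic model's field definitions):

```python
def discover_columns(model_cls):
    """Collect annotated attributes plus any @property (computed) attributes."""
    columns = list(getattr(model_cls, "__annotations__", {}))
    for name, attr in vars(model_cls).items():
        if isinstance(attr, property) and name not in columns:
            columns.append(name)
    return columns

class User:
    name: str
    age: int

    @property
    def is_adult(self):  # stands in for a pydantic @computed_field
        return self.age >= 18

cols = discover_columns(User)
# cols == ["name", "age", "is_adult"]
```

With pydantic v2 specifically, the same information is available from the model class itself (`model_fields` plus `model_computed_fields`), which is the hook a fix in the table logic would likely use.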
closed
2024-05-26T17:25:29Z
2024-05-28T17:33:20Z
https://github.com/pydantic/FastUI/issues/320
[]
jimkring
4
vi3k6i5/flashtext
nlp
69
Comparison to `re2` or `hyperscan`
Hi, Both `re2` and `hyperscan` are regex engines tuned for performance. Did you try comparing flashtext to them? Thanks,
open
2018-12-05T20:56:13Z
2018-12-05T20:56:13Z
https://github.com/vi3k6i5/flashtext/issues/69
[]
elazar-lb
0
huggingface/datasets
machine-learning
6,858
Segmentation fault
### Describe the bug

Using various versions of datasets, I'm no longer able to load that dataset without a segmentation fault. Several other files are also concerned.

### Steps to reproduce the bug

```shell
# Create a new venv
python3 -m venv venv_test
source venv_test/bin/activate

# Install the latest version
pip install datasets

# Load the dataset
python3 -q -X faulthandler -c "from datasets import load_dataset; load_dataset('EuropeanParliament/Eurovoc', '1998-09')"
```

### Expected behavior

Data must be loaded.

### Environment info

datasets==2.19.0
Python 3.11.7
Darwin 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:51:50 PDT 2023; root:xnu-8796.121.2~5/RELEASE_X86_64 x86_64
closed
2024-05-02T08:28:49Z
2024-05-03T08:43:21Z
https://github.com/huggingface/datasets/issues/6858
[]
scampion
2
koxudaxi/datamodel-code-generator
pydantic
2,220
Migrate from Poetry to uv package manager
**Description**

We should consider migrating our dependency management from Poetry to uv, which is a new extremely fast Python package installer and resolver written in Rust.

**Benefits of migration:**

- Significantly faster package installation (up to 10-100x faster than pip)
- Built-in compile cache for faster repeated installations
- Reliable dependency resolution
- Native support for all standard Python package formats
- Compatible with existing `pyproject.toml` configurations
- Lower memory usage compared to Poetry

**Required Changes:**

1. Remove Poetry-specific configurations while keeping the essential `pyproject.toml` metadata
2. Update CI/CD pipelines to use uv instead of Poetry
3. Update development setup instructions in documentation
4. Ensure all development scripts/tools are compatible with uv
5. Update the project's virtual environment handling

**Notes for Implementation:**

- uv can read dependencies directly from `pyproject.toml`
- The migration should be backward compatible until fully tested
- We should provide clear migration instructions for contributors
- Consider adding both Poetry and uv support during a transition period

**Resources:**

- uv Documentation: https://docs.astral.sh/uv/
- Migration Guide: https://docs.astral.sh/uv/migration/

**Testing Requirements:**

- Verify all dependencies are correctly resolved
- Ensure development workflow remains smooth
- Test CI/CD pipeline with new setup
- Verify package installation in clean environments
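For reference, the core of the migration is moving dependency metadata from Poetry's own table to the standard PEP 621 `[project]` table that uv reads natively. A hedged sketch with made-up package names, not this repo's actual pyproject.toml (shown as before/after fragments, not a single valid file):

```toml
# Before (Poetry-specific)
[tool.poetry]
name = "example"
version = "0.1.0"

[tool.poetry.dependencies]
python = "^3.9"
httpx = ">=0.23"

# After (PEP 621, understood by uv)
[project]
name = "example"
version = "0.1.0"
requires-python = ">=3.9"
dependencies = ["httpx>=0.23"]
```

Note that Poetry's caret constraint (`^3.9`) has no PEP 440 equivalent and must be rewritten as an explicit range, which is one of the manual steps a transition period would need to cover.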
closed
2024-12-14T06:44:20Z
2025-01-30T18:19:10Z
https://github.com/koxudaxi/datamodel-code-generator/issues/2220
[ "enhancement", "good first issue", "dependencies" ]
koxudaxi
6
davidsandberg/facenet
tensorflow
853
TypeError: unorderable types: NoneType() <= int()
Hi all,

While I was training my own dataset, everything was fine for 6 hours, but then this error suddenly appeared:

```
Epoch: [90][1000/1000] Time 0.516 Loss 0.198 Xent 0.019 RegLoss 0.179 Accuracy 0.989 Lr 0.00050 Cl 0.915
Running forward pass on validation set
Validation Epoch: 90 Time 0.000 Loss nan Xent nan Accuracy nan
Saving variables
Variables saved in 0.11 seconds
Saving statistics
Traceback (most recent call last):
  File "src/train_softmax.py", line 580, in <module>
    main(parse_arguments(sys.argv[1:]))
  File "src/train_softmax.py", line 234, in main
    prelogits, prelogits_center_loss, args.random_rotate, args.random_crop, args.random_flip, prelogits_norm, args.prelogits_hist_max, args.use_fixed_image_standardization)
  File "src/train_softmax.py", line 308, in train
    if lr<=0:
TypeError: unorderable types: NoneType() <= int()
```

Does anyone know what this error is? I used the following parameters:

```
python3 src/train_softmax.py \
--logs_base_dir ~/logs/facenet/ \
--models_base_dir ~/models/facenet/ \
--data_dir ~/facenet/done/ \
--image_size 160 \
--model_def models.inception_resnet_v1 \
--optimizer ADAM --learning_rate -1 \
--max_nrof_epochs 150 \
--keep_probability 0.8 \
--random_crop \
--random_flip \
--use_fixed_image_standardization \
--learning_rate_schedule_file data/learning_rate_schedule_classifier_casia.txt \
--weight_decay 5e-4 \
--embedding_size 512 \
--lfw_distance_metric 1 \
--lfw_use_flipped_images \
--lfw_subtract_mean \
--validation_set_split_ratio 0.05 \
--validate_every_n_epochs 5 \
--prelogits_norm_loss_factor 5e-4
```
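For what it's worth, `--learning_rate -1` tells train_softmax.py to read the rate from the schedule file, and `NoneType() <= int()` suggests the lookup returned `None` at epoch 90 — e.g. the schedule file has no bracket covering that epoch. A small stdlib sketch of the failure mode and a guard (the schedule format here is illustrative, not facenet's exact parser):

```python
def lr_from_schedule(schedule, epoch):
    """schedule: {max_epoch: lr}; return the lr of the first bracket whose
    max_epoch is >= epoch, or None if the file doesn't cover the epoch."""
    for max_epoch in sorted(schedule):
        if epoch <= max_epoch:
            return schedule[max_epoch]
    return None

schedule = {60: 0.05, 80: 0.005}            # covers epochs 0-80 only
lr_ok = lr_from_schedule(schedule, 30)      # 0.05
lr_missing = lr_from_schedule(schedule, 90) # None -> crashes at `if lr <= 0:`

try:
    lr_missing <= 0   # the exact comparison from train_softmax.py line 308
except TypeError as exc:
    comparison_error = str(exc)

def check_lr(lr):
    """Fail loudly instead of crashing mid-training with a TypeError."""
    if lr is None:
        raise ValueError("learning rate schedule has no entry for this epoch")
    return lr
```

If this is the cause, extending the schedule file to cover all 150 epochs would avoid the crash.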
open
2018-08-23T02:15:50Z
2019-07-10T14:29:46Z
https://github.com/davidsandberg/facenet/issues/853
[]
qjqjqjj
4
apache/airflow
automation
47,425
AIP-38 Add Asset Events to Dag Run and Task Instance details
### Body

For asset-triggered dag runs, somewhere on the Dag Run page should include the asset events that triggered it.

For task instances that produce asset events, somewhere on the TaskInstance page should include the asset events created.

Both should use `src/components/AssetEvents`

### Committer

- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
closed
2025-03-05T22:28:39Z
2025-03-12T02:08:02Z
https://github.com/apache/airflow/issues/47425
[ "kind:feature", "area:UI", "AIP-38", "area:datasets" ]
bbovenzi
0
remsky/Kokoro-FastAPI
fastapi
61
Apple Silicon support
```
docker run -p 8880:8880 ghcr.io/remsky/kokoro-fastapi-cpu:v0.1.0post1 # CPU
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)
<jemalloc>: (This is the expected behaviour if you are running under QEMU)
```
closed
2025-01-15T17:47:21Z
2025-01-24T12:58:59Z
https://github.com/remsky/Kokoro-FastAPI/issues/61
[ "enhancement", "good first issue" ]
awdemos
2
pydata/pandas-datareader
pandas
908
Empty prices from yahoo data reader
Hi,

There is a recurrent error with the Yahoo data reader: although the information for prices is populated on their source page, the Yahoo data reader finds it empty:

```python
json_response["context"]["dispatcher"]["stores"]["HistoricalPriceStore"]
{'prices': [], 'isPending': False, 'firstTradeDate': -1325583000, 'id': '1d16321104001632196799', 'timeZone': {'gmtOffset': -14400}, 'eventsData': []}
```

Not sure why this is happening since the URL is correct. This issue leads to the already known `KeyError: "Date"` occurring on this line:

```python
prices["Date"] = to_datetime(to_datetime(prices["Date"], unit="s").dt.date)
```
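A defensive check before indexing `prices["Date"]` would at least turn the opaque `KeyError: "Date"` into a clear message about the empty payload. A sketch — the `json_response` shape is taken from the report above, the exception class name is illustrative:

```python
class RemoteDataError(Exception):
    """Raised when the provider returns a well-formed but empty payload."""

def extract_prices(json_response):
    store = json_response["context"]["dispatcher"]["stores"]["HistoricalPriceStore"]
    prices = store.get("prices", [])
    if not prices:
        raise RemoteDataError("Yahoo returned no price rows for the requested range")
    return prices

# Shape reported in the issue: the prices key is present but empty.
empty = {"context": {"dispatcher": {"stores": {"HistoricalPriceStore": {
    "prices": [], "isPending": False}}}}}

try:
    extract_prices(empty)
except RemoteDataError as exc:
    message = str(exc)
```

This doesn't fix the root cause (Yahoo returning nothing for a seemingly valid URL), but it makes the failure diagnosable.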
open
2021-09-20T13:09:34Z
2021-10-10T09:22:03Z
https://github.com/pydata/pandas-datareader/issues/908
[]
pablomerix
1
mkhorasani/Streamlit-Authenticator
streamlit
146
missing 1 required positional argument: 'form_name'
I followed the tutorial in the readme file, but when I run `streamlit run app.py` I get the following error: `login() missing 1 required positional argument: 'form_name'`. I have also created a config.yaml file based on the example provided. My entire code is attached below:

```python
import yaml
from yaml.loader import SafeLoader
import streamlit_authenticator as stauth
import streamlit as st

with open('config.yaml') as file:
    config = yaml.load(file, Loader=SafeLoader)

authenticator = stauth.Authenticate(
    config['credentials'],
    config['cookie']['name'],
    config['cookie']['key'],
    config['cookie']['expiry_days'],
    config['preauthorized']
)

authenticator.login()

if st.session_state["authentication_status"]:
    authenticator.logout()
    st.write(f'Welcome *{st.session_state["name"]}*')
    st.title('Some content')
elif st.session_state["authentication_status"] is False:
    st.error('Username/password is incorrect')
elif st.session_state["authentication_status"] is None:
    st.warning('Please enter your username and password')
```

`form_name` is related to the `fields` parameter, but there is a default value for it based on the documentation. Have I missed anything? The version of streamlit-authenticator that I use is 0.2.1.
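The error is the usual Python signature mismatch: as far as I can tell, in streamlit-authenticator 0.2.x `login()` still took a required positional `form_name`, while the `fields`-based signature with defaults only arrived in 0.3.x — so the 0.3-style call `authenticator.login()` fails against 0.2.1. A minimal stdlib sketch of the same failure mode (both signatures here are illustrative stand-ins, not the library's exact code):

```python
def login_v02(form_name, location="main"):
    """Old-style API: form_name is a required positional argument."""
    return f"rendered '{form_name}' form at {location}"

def login_v03(location="main", fields=None):
    """New-style API: everything has a default, so login_v03() works."""
    fields = fields or {"Form name": "Login"}
    return f"rendered '{fields['Form name']}' form at {location}"

try:
    login_v02()          # reproduces: missing 1 required positional argument
except TypeError as exc:
    old_api_error = str(exc)

new_api_result = login_v03()
```

So the fix is either to call `authenticator.login('Login')` on 0.2.1, or to upgrade the package to match the tutorial being followed.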
closed
2024-03-23T19:58:06Z
2024-03-23T20:18:42Z
https://github.com/mkhorasani/Streamlit-Authenticator/issues/146
[ "help wanted" ]
iliasmachairas
2
modin-project/modin
data-science
6,889
Define `__all__` for `envvars.py`
closed
2024-01-26T17:09:37Z
2024-01-30T09:02:48Z
https://github.com/modin-project/modin/issues/6889
[ "Code Quality 💯" ]
anmyachev
0
ploomber/ploomber
jupyter
1,067
git error in new repository
```sh
ploomber examples -n templates/ml-basic -o ml
cd ml
git init
ploomber status
```

Output:

```
Loading pipeline...
fatal: bad revision 'HEAD'
name      Last run          Outdated?               Product                                  Doc (short)                                   Location
--------  ----------------  ----------------------  ---------------------------------------  --------------------------------------------  ------------------------------------------
get       Has not been run  Source code             File('output/get.parquet')               Get data                                      /Users/eduardo/Desktop/test/ml/tasks.py:6
features  Has not been run  Source code & Upstream  File('output/features.parquet')          Generate new features from existing columns  /Users/eduardo/Desktop/test/ml/tasks.py:20
join      Has not been run  Source code & Upstream  File('output/join.parquet')              Join raw data with generated features         /Users/eduardo/Desktop/test/ml/tasks.py:29
fit       Has not been run  Source code & Upstream  MetaProduct({'model': File('output/model.pickle'), 'nb': File('output/nb.html')})  Train a model  /Users/eduardo/Desktop/test/ml/fit.py
```

The `fatal: bad revision 'HEAD'` is due to an error when calling git; it looks like it isn't causing any trouble (besides the error message), but we should check.
closed
2023-01-27T17:24:24Z
2023-01-29T02:39:08Z
https://github.com/ploomber/ploomber/issues/1067
[]
edublancas
2
deepfakes/faceswap
deep-learning
811
GPU usage is less than 20%
I trained for 60 hours, Iterations is 1,6000. CPU usage is around 100%,but GPU usage hash been less than %20, This should be abnormal?This seems to be still training with the CPU.How should I troubleshoot where the problem is? I download the file faceswap_setup_x64.exe use gui version, and install Nvidia CUDA cuDNN for my PC. ============ System Information ============ encoding: cp936 git_branch: master git_commits: c3adc93 Update GUI Graph + Stats when model has finished saving. 2610eff Bugfix: GUI: Progress bar on times over 1 hour (extract/convert). c1c60a9 bugfix: Clip output from scaling in convert. 8b2f166 Update helptext for CA Initialization. b6c830c Bugfix: Alignments tool: Correctly set items attribute on Check job gpu_cuda: 10.1 gpu_cudnn: 7.6.2 gpu_devices: GPU_0: GeForce GT 1030 gpu_devices_active: GPU_0 gpu_driver: 430.86 gpu_vram: GPU_0: 2048MB os_machine: AMD64 os_platform: Windows-10-10.0.17134-SP0 os_release: 10 py_command: d:\Users\Administrator\faceswap/faceswap.py gui py_conda_version: conda 4.7.10 py_implementation: CPython py_version: 3.6.8 py_virtual_env: True sys_cores: 8 sys_processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel sys_ram: Total: 16320MB, Available: 13362MB, Used: 2958MB, Free: 13362MB =============== Pip Packages =============== absl-py==0.7.1 astor==0.7.1 certifi==2019.6.16 cloudpickle==1.2.1 cycler==0.10.0 cytoolz==0.10.0 dask==2.1.0 decorator==4.4.0 fastcluster==1.1.25 ffmpy==0.2.2 gast==0.2.2 grpcio==1.16.1 h5py==2.9.0 imageio==2.5.0 imageio-ffmpeg==0.3.0 joblib==0.13.2 Keras==2.2.4 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.0 kiwisolver==1.1.0 Markdown==3.1.1 matplotlib==2.2.2 mkl-fft==1.0.12 mkl-random==1.0.2 mkl-service==2.0.2 mock==3.0.5 networkx==2.3 numpy==1.16.2 nvidia-ml-py3==7.352.1 olefile==0.46 pathlib==1.0.1 Pillow==6.1.0 protobuf==3.8.0 psutil==5.6.3 pyparsing==2.4.0 pyreadline==2.1 python-dateutil==2.8.0 pytz==2019.1 PyWavelets==1.0.3 pywin32==223 PyYAML==5.1.1 scikit-image==0.15.0 
scikit-learn==0.21.2 scipy==1.2.1 six==1.12.0 tensorboard==1.13.1 tensorflow==1.13.1 tensorflow-estimator==1.13.0 termcolor==1.1.0 toolz==0.10.0 toposort==1.5 tornado==6.0.3 tqdm==4.32.1 Werkzeug==0.15.4 wincertstore==0.2 ============== Conda Packages ============== # packages in environment at C:\Users\Administrator\MiniConda3\envs\faceswap: # # Name Version Build Channel _tflow_select 2.3.0 mkl absl-py 0.7.1 py36_0 astor 0.7.1 py36_0 blas 1.0 mkl ca-certificates 2019.5.15 0 certifi 2019.6.16 py36_0 cloudpickle 1.2.1 py_0 cycler 0.10.0 py36h009560c_0 cytoolz 0.10.0 py36he774522_0 dask-core 2.1.0 py_0 decorator 4.4.0 py36_1 fastcluster 1.1.25 py36h830ac7b_1000 conda-forge ffmpeg 4.1.3 h6538335_0 conda-forge ffmpy 0.2.2 pypi_0 pypi freetype 2.9.1 ha9979f8_1 gast 0.2.2 py36_0 grpcio 1.16.1 py36h351948d_1 h5py 2.9.0 py36h5e291fa_0 hdf5 1.10.4 h7ebc959_0 icc_rt 2019.0.0 h0cc432a_1 icu 58.2 ha66f8fd_1 imageio 2.5.0 py36_0 imageio-ffmpeg 0.3.0 py_0 conda-forge intel-openmp 2019.4 245 joblib 0.13.2 py36_0 jpeg 9c hfa6e2cd_1001 conda-forge keras 2.2.4 0 keras-applications 1.0.8 py_0 keras-base 2.2.4 py36_0 keras-preprocessing 1.1.0 py_1 kiwisolver 1.1.0 py36ha925a31_0 libblas 3.8.0 8_mkl conda-forge libcblas 3.8.0 8_mkl conda-forge liblapack 3.8.0 8_mkl conda-forge liblapacke 3.8.0 8_mkl conda-forge libmklml 2019.0.3 0 libpng 1.6.37 h7602738_0 conda-forge libprotobuf 3.8.0 h7bd577a_0 libtiff 4.0.10 h6512ee2_1003 conda-forge libwebp 1.0.2 hfa6e2cd_2 conda-forge lz4-c 1.8.3 he025d50_1001 conda-forge markdown 3.1.1 py36_0 matplotlib 2.2.2 py36had4c4a9_2 mkl 2019.4 245 mkl-service 2.0.2 py36he774522_0 mkl_fft 1.0.12 py36h14836fe_0 mkl_random 1.0.2 py36h343c172_0 mock 3.0.5 py36_0 networkx 2.3 py_0 numpy 1.16.2 py36h19fb1c0_0 numpy-base 1.16.2 py36hc3f5095_0 nvidia-ml-py3 7.352.1 pypi_0 pypi olefile 0.46 py36_0 opencv 4.1.0 py36hb4945ee_6 conda-forge openssl 1.1.1c he774522_1 pathlib 1.0.1 py36_1 pillow 6.1.0 py36hdc69c19_0 pip 19.1.1 py36_0 protobuf 3.8.0 py36h33f27b4_0 psutil 
5.6.3 py36he774522_0 pyparsing 2.4.0 py_0 pyqt 5.9.2 py36h6538335_2 pyreadline 2.1 py36_1 python 3.6.8 h9f7ef89_7 python-dateutil 2.8.0 py36_0 pytz 2019.1 py_0 pywavelets 1.0.3 py36h8c2d366_1 pywin32 223 py36hfa6e2cd_1 pyyaml 5.1.1 py36he774522_0 qt 5.9.7 hc6833c9_1 conda-forge scikit-image 0.15.0 py36ha925a31_0 scikit-learn 0.21.2 py36h6288b17_0 scipy 1.2.1 py36h29ff71c_0 setuptools 41.0.1 py36_0 sip 4.19.8 py36h6538335_0 six 1.12.0 py36_0 sqlite 3.29.0 he774522_0 tensorboard 1.13.1 py36h33f27b4_0 tensorflow 1.13.1 mkl_py36hd212fbe_0 tensorflow-base 1.13.1 mkl_py36hcaf7020_0 tensorflow-estimator 1.13.0 py_0 termcolor 1.1.0 py36_1 tk 8.6.8 hfa6e2cd_0 toolz 0.10.0 py_0 toposort 1.5 py_3 conda-forge tornado 6.0.3 py36he774522_0 tqdm 4.32.1 py_0 vc 14.1 h0510ff6_4 vs2015_runtime 14.15.26706 h3a45250_4 werkzeug 0.15.4 py_0 wheel 0.33.4 py36_0 wincertstore 0.2 py36h7fe50ca_0 xz 5.2.4 h2fa13f4_1001 conda-forge yaml 0.1.7 hc54c509_2 zlib 1.2.11 h2fa13f4_1005 conda-forge zstd 1.4.0 hd8a0e53_0 conda-forge
closed
2019-07-27T05:15:06Z
2019-09-11T03:44:21Z
https://github.com/deepfakes/faceswap/issues/811
[]
kamike
4
pydantic/FastUI
pydantic
272
GoToEvent doesn't refresh the page
I have a landing page with a list of items, and each item has a delete button. Pressing the button pops up a modal that asks for confirmation. If the delete is confirmed, I need the landing page to refresh so that the deleted item is no longer there.

I've tried to do this by having the `post` route handler for the delete endpoint return and fire a `GoToEvent`:

```python
[c.FireEvent(event=GoToEvent(url='/landing_page'))]
```

I can change this URL and go to a different endpoint after the delete action, but if I point back to the landing page where the event was triggered, the page doesn't refresh - it still shows the deleted item and the UI is stale.

Is there a way to fix this, without having the delete method route to an otherwise pointless intermediate page? Thanks!
closed
2024-04-14T17:49:39Z
2024-04-18T16:07:22Z
https://github.com/pydantic/FastUI/issues/272
[]
charlie-corus
1
tqdm/tqdm
pandas
661
Reopening a completed/finished/closed bar
When I re-create the tqdm object to reset the progress bar, it creates a new one. I would like to re-use the existing one in the console. Can you please let me know how I can reset the existing progress bar?
open
2019-01-21T15:15:47Z
2019-01-26T18:59:54Z
https://github.com/tqdm/tqdm/issues/661
[ "p4-enhancement-future 🧨" ]
akaniklaus
15
viewflow/viewflow
django
267
_wrapper() missing 2 required positional arguments: 'flow_class' and 'flow_task'
I use django-viewflow and pass variables to a function-based view. But when I apply the `@flow_start_view` decorator to the view, I get the following error: `_wrapper() missing 2 required positional arguments: 'flow_class' and 'flow_task'`.
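This error pattern usually means the decorated view is being called directly (e.g. from a plain Django URL pattern) instead of through the flow's URL routing, which is what normally supplies `flow_class` and `flow_task`. A stdlib sketch of the mechanism — viewflow's actual wrapper is more involved, this just mimics the signature:

```python
import functools

def flow_start_view(view):
    """Toy version of the decorator: the wrapper expects the flow's URL
    routing layer to pass flow_class and flow_task alongside the request."""
    @functools.wraps(view)
    def _wrapper(request, flow_class, flow_task, **kwargs):
        activation = {"flow_class": flow_class, "flow_task": flow_task}
        return view(request, activation, **kwargs)
    return _wrapper

@flow_start_view
def start_view(request, activation, **kwargs):
    return activation["flow_task"]

try:
    start_view("request")            # called directly -> missing args
except TypeError as exc:
    direct_call_error = str(exc)

# Called the way the flow URL config would call it:
routed_result = start_view("request", flow_class="MyFlow", flow_task="start")
```

If this matches your setup, registering the view via the flow's own URLs (rather than a hand-written `path()`) should make the two arguments appear.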
closed
2020-03-29T09:26:42Z
2020-05-12T03:10:57Z
https://github.com/viewflow/viewflow/issues/267
[ "request/question", "dev/flow" ]
zgpnuaa
2
microsoft/Bringing-Old-Photos-Back-to-Life
pytorch
12
Absolute path not found
I set everything up successfully using pip3, and I run this with a test image in the `test_image` folder on my desktop:

```
python run.py --input_folder /Users/g14a/Desktop/test_image/ --output_folder /Users/g14a/Desktop/test_output/ --GPU 0 --with_scratch
```

and then I get this error log:

```
Running Stage 1: Overall restoration
Traceback (most recent call last):
  File "detection.py", line 9, in <module>
    import torch
ImportError: No module named torch
Traceback (most recent call last):
  File "test.py", line 6, in <module>
    from torch.autograd import Variable
ImportError: No module named torch.autograd
Traceback (most recent call last):
  File "run.py", line 79, in <module>
    for x in os.listdir(stage_1_results):
OSError: [Errno 2] No such file or directory: '/Users/g14a/Desktop/test_output/stage_1_restore_output/restored_image'
```

Is there anything I'm missing here?
closed
2020-09-25T14:27:40Z
2020-09-28T09:37:16Z
https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/12
[]
g14a
3
vanna-ai/vanna
data-visualization
30
Migrate tests to this repo
Our tests are on the server repo -- they need to be migrated to this repo
closed
2023-07-24T12:15:51Z
2023-07-24T17:57:14Z
https://github.com/vanna-ai/vanna/issues/30
[ "internal" ]
zainhoda
0
sqlalchemy/sqlalchemy
sqlalchemy
10,321
ensure all modules in testing etc. can be imported without any configuration
### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10320
closed
2023-09-07T21:34:17Z
2023-09-08T12:54:27Z
https://github.com/sqlalchemy/sqlalchemy/issues/10321
[ "bug", "setup", "near-term release" ]
zzzeek
1
google-research/bert
nlp
759
Exporting probabilities over the learned vocabulary
Currently, the `extract_features.py` file supports extracting representations before the last output layer, but I want to extract the final probabilities over the vocabulary. I modified the code with the following addition, but it doesn't seem to work (the probabilities I'm getting are all very small).

```python
model_output = model.get_sequence_output()
batch_size = tf.shape(model_output)[0]
length = tf.shape(model_output)[1]
hidden_size = tf.shape(model_output)[2]
model_output = tf.reshape(inputs, [-1, hidden_size])
logits = tf.matmul(model_output, model.get_embedding_table(), transpose_b=True)
probs = tf.nn.softmax(logits, axis=-1)
probs = tf.reshape(probs, [batch_size, length, bert_config.vocab_size])
```
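One likely culprit in the snippet above: `tf.reshape(inputs, ...)` reshapes `inputs` rather than `model_output`, so the logits are computed from the wrong tensor. Numerically, tying output logits to the embedding table is just `softmax(H @ E^T)`; a dependency-free sketch of that computation with toy numbers (shapes and values are illustrative):

```python
import math

def matmul_transpose_b(hidden, table):
    """logits[i][v] = dot(hidden[i], table[v])  (i.e. H @ E^T)."""
    return [[sum(h * e for h, e in zip(row, emb)) for emb in table]
            for row in hidden]

def softmax(row):
    m = max(row)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in row]
    total = sum(exps)
    return [e / total for e in exps]

hidden = [[1.0, 0.0], [0.0, 1.0]]                        # 2 positions, hidden size 2
embedding_table = [[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]]   # vocab size 3

logits = matmul_transpose_b(hidden, embedding_table)
probs = [softmax(row) for row in logits]
```

Each row of `probs` sums to 1, so if every probability comes out tiny, the logits are probably being taken from the wrong tensor (or the softmax is over the wrong axis).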
closed
2019-07-12T18:44:11Z
2019-07-16T15:32:33Z
https://github.com/google-research/bert/issues/759
[]
lioutasb
0
DistrictDataLabs/yellowbrick
scikit-learn
419
Add alpha transparency to RadViz
To make the RadViz a bit easier to read, we can add optional transparency, set by the user to be able to distinguish regions of more or less density.
closed
2018-05-14T21:49:40Z
2018-05-29T21:36:43Z
https://github.com/DistrictDataLabs/yellowbrick/issues/419
[ "type: feature", "level: novice", "pycon2019" ]
bbengfort
1
holoviz/panel
plotly
6,863
Legend Position of Holoviews Layout Affects Sizing of Other Plot(s) in GridSpec Layout
### Description:

When using a `GridSpec` layout to create a grid of plots in Panel, the position of the legend in one plot can affect the sizing of the other plot(s) above it in the grid. This issue occurs when the legend is positioned outside of the plot (e.g., 'right'). However, when the legend is positioned within the plot (e.g., 'bottom_right'), the issue does not occur.

![gridSpec_issue](https://github.com/holoviz/panel/assets/2987662/2c363b96-af7a-4503-b4e8-322783cd4e0f)

### Code to reproduce:

```python
import panel as pn
import holoviews as hv
import numpy as np

hv.extension('bokeh')

# Create 3 overlay plots with legends
plots = [hv.Curve(np.random.rand(10), label='Curve 1') * hv.Curve(np.random.rand(10), label='Curve 2') for _ in range(3)]

# Create a 2x2 grid
grid = pn.GridSpec(max_height=3000)

# Add the plots to the grid
grid[0, 0] = pn.panel(plots[0].opts(show_legend=False))
grid[0, 1] = pn.panel(plots[1].opts(show_legend=False))
grid[1, :] = pn.panel(plots[2].options(show_legend=True, width=800, legend_position='right'))

# Create a 2x2 grid
grid1 = pn.GridSpec(max_height=3000)

# Add the plots to the grid
grid1[0, 0] = pn.panel(plots[0].opts(show_legend=False))
grid1[0, 1] = pn.panel(plots[1].opts(show_legend=False))
grid1[1, :] = pn.panel(plots[2].options(show_legend=True, width=800, legend_position='bottom_right'))

pn.Row(grid, grid1).servable()
```

### Environment:

- Python version: 3.8.10
- Panel version: 1.2.0
- HoloViews version: 1.17.1
open
2024-05-24T09:40:10Z
2024-05-24T09:40:10Z
https://github.com/holoviz/panel/issues/6863
[]
matsvandecavey
0
keras-team/keras
deep-learning
20,918
Backend not switching dynamically between tensorflow and torch
As the title says, using keras 3.8 on python 3.10:

```python
from keras.src.utils import backend_utils
from keras.src import backend

dynamic = backend_utils.DynamicBackend()
dynamic.set_backend('tensorflow')
print(backend.backend())
dynamic.set_backend('torch')
print(backend.backend())
```

```bash
tensorflow
tensorflow
```
open
2025-02-17T23:11:18Z
2025-02-18T18:13:14Z
https://github.com/keras-team/keras/issues/20918
[ "type:Bug" ]
jobs-git
2
ageitgey/face_recognition
machine-learning
895
Question: More image encodings means more accuracy?
Hi,

I tried to build a face recognition system using OpenCV, but this lib looks more promising. I saw an API to extract the face encodings of an image, which I can use to store them somewhere.

**My question is**: if I store several face encodings of **person A** (let's say 5 image encodings) and then recognize **person A** by providing the 5 stored face encodings:

=> Does that mean more face encodings of a person result in more accuracy, or can this lib work with just 1 image encoding?

Regards,
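In practice, storing several encodings per person and matching against the minimum distance tends to be more robust than a single encoding, since it covers pose and lighting variation. A dependency-free sketch of min-distance matching with toy 2-d vectors (real encodings are 128-d; the 0.6 tolerance mirrors the library's common default):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(known, probe, tolerance=0.6):
    """known: {name: [encoding, ...]} with several encodings per person.
    Score each person by the minimum distance to any of their encodings."""
    scores = {name: min(euclidean(enc, probe) for enc in encs)
              for name, encs in known.items()}
    name = min(scores, key=scores.get)
    return (name, scores[name]) if scores[name] <= tolerance else (None, scores[name])

known = {
    "person_a": [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]],   # 3 stored encodings
    "person_b": [[1.0, 1.0]],
}
name, dist = best_match(known, [0.05, 0.05])
```

Averaging the stored encodings into one vector is another common option; min-distance keeps outlier poses from dragging the centroid around.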
open
2019-07-30T17:09:42Z
2020-09-28T19:51:10Z
https://github.com/ageitgey/face_recognition/issues/895
[]
sskhokhar
3
slackapi/python-slack-sdk
asyncio
1,116
Slack api posting messages both in thread and channel
### Reproducible in: #### The Slack SDK version ``` slack-sdk @ file:///root/.cache/pypoetry/artifacts/27/53/c9/ca63502b551d096a9302635cefcd8e6f65d780f2e43bd81aa24bfcd027/slack_sdk-3.11.0-py2.py3-none-any.whl ``` #### Python runtime version `Python 3.9.7` #### OS info `python:3.9-slim-buster` #### Steps to reproduce: 1. Post a new message to a thread with ``` try: _message = await self._bot.chat_postMessage( channel=reply_message.chat.chat, text=body, thread_ts=reply_message.replies_to.message_id, ) return str(_message["ts"]) except SlackApiError as e: logging.error(e, exc_info=True) ``` 2. Get following response from slack ``` [2021-09-16 10:41:09 +0526] Sending a request - url: POST https://www.slack.com/api/chat.postMessage, params: {}, files: {}, data: {}, json: {'channel': 'G01KLTD4VND', 'text': '*Stanislav Mitrofanov* likes this', 'thread_ts': '1631788537.013100'}, proxy: {}, headers: {'Content-Type': 'application/json;charset=utf-8', 'Authorization': '(redacted)', 'User-Agent': 'Python/3.9.7 slackclient/3.11.0 Linux/4.15.0-111-generic'} [2021-09-16 10:41:09 +0878] Received the following response - status: 200, headers: {'Date': 'Thu, 16 Sep 2021 10:41:09 GMT', 'Server': 'Apache', 'x-xss-protection': '0', 'Pragma': 'no-cache', 'Cache-Control': 'private, no-cache, no-store, must-revalidate', 'Access-Control-Allow-Origin': '*', 'strict-transport-security': 'max-age=31536000; includeSubDomains; preload', 'x-slack-req-id': 'f8d99e473b2cbeda278d1ee8e96d981b', 'x-content-type-options': 'nosniff', 'referrer-policy': 'no-referrer', 'Access-Control-Expose-Headers': 'x-slack-req-id, retry-after', 'x-slack-backend': 'r', 'x-oauth-scopes': 'channels:join,chat:write,app_mentions:read,reactions:read,reactions:write,users:read.email,users:read,channels:read,groups:read,mpim:read,im:read,commands,mpim:write,im:write', 'x-accepted-oauth-scopes': 'chat:write', 'Expires': 'Mon, 26 Jul 1997 05:00:00 GMT', 'Vary': 'Accept-Encoding', 'Access-Control-Allow-Headers': 'slack-route, 
x-slack-version-ts, x-b3-traceid, x-b3-spanid, x-b3-parentspanid, x-b3-sampled, x-b3-flags', 'Content-Encoding': 'gzip', 'Content-Length': '381', 'Content-Type': 'application/json; charset=utf-8', 'x-envoy-upstream-service-time': '83', 'x-backend': 'main_normal main_bedrock_normal_with_overflow main_canary_with_overflow main_bedrock_canary_with_overflow main_control_with_overflow main_bedrock_control_with_overflow', 'x-server': 'slack-www-hhvm-main-iad-pj8e', 'x-via': 'envoy-www-iad-0snq, haproxy-edge-fra-1npe', 'x-slack-shared-secret-outcome': 'shared-secret', 'Via': 'envoy-www-iad-0snq'}, body: {'ok': True, 'channel': 'G01KLTD4VND', 'ts': '1631788869.013700', 'message': {'bot_id': 'B027UFGEYG6', 'type': 'message', 'text': '*Stanislav Mitrofanov* likes this', 'user': 'U027Y7E67M3', 'ts': '1631788869.013700', 'team': 'T29V7H6CS', 'bot_profile': {'id': 'B027UFGEYG6', 'deleted': False, 'name': 'vcs-bot', 'updated': 1626676701, 'app_id': 'A0287DU94S0', 'icons': {'image_36': 'https://avatars.slack-edge.com/2021-07-14/2259131742759_a51520f8f27104d55dc8_36.png', 'image_48': 'https://avatars.slack-edge.com/2021-07-14/2259131742759_a51520f8f27104d55dc8_48.png', 'image_72': 'https://avatars.slack-edge.com/2021-07-14/2259131742759_a51520f8f27104d55dc8_72.png'}, 'team_id': 'T29V7H6CS'}, 'thread_ts': '1631788537.013100', 'parent_user_id': 'U027Y7E67M3'}} ``` ### Expected result: Message posted only thread ### Actual result: Message posted both in thread and channel with same id(timestamp), any reactions applies to both messages
closed
2021-09-16T11:07:49Z
2025-03-04T12:46:14Z
https://github.com/slackapi/python-slack-sdk/issues/1116
[ "question", "needs info", "web-client", "Version: 3x", "auto-triage-stale" ]
StanislavMitrofanov
8
Kinto/kinto
api
2,687
HTTP 500 on get permissions (ValueError)
**Steps to reproduce**

_docker run -p 8888:8888 kinto/kinto-server_

Running kinto 14.0.1.dev0.

Request

```
GET /v1/permissions?_since=6148&_token= HTTP/1.1
Host: 127.0.0.1:8888
```

Response

```
{
    "code": 500,
    "errno": 999,
    "error": "Internal Server Error",
    "message": "A programmatic error occured, developers have been informed.",
    "info": "https://github.com/Kinto/kinto/issues/"
}
```

Log:

```
"GET /v1/permissions?_since=6148&_token=" ? (? ms) not enough values to unpack (expected 3, got 2) errno=999
  File "/app/kinto/core/events.py", line 157, in tween
  File "/usr/local/lib/python3.7/site-packages/pyramid/router.py", line 148, in handle_request
    registry, request, context, context_iface, view_name
  File "/usr/local/lib/python3.7/site-packages/pyramid/view.py", line 683, in _call_view
    response = view_callable(context, request)
  File "/usr/local/lib/python3.7/site-packages/pyramid/config/views.py", line 169, in __call__
    return view(context, request)
  File "/usr/local/lib/python3.7/site-packages/pyramid/config/views.py", line 188, in attr_view
  File "/usr/local/lib/python3.7/site-packages/pyramid/config/views.py", line 214, in predicate_wrapper
  File "/usr/local/lib/python3.7/site-packages/pyramid/viewderivers.py", line 325, in secured_view
  File "/usr/local/lib/python3.7/site-packages/pyramid/viewderivers.py", line 436, in rendered_view
    result = view(context, request)
  File "/usr/local/lib/python3.7/site-packages/pyramid/viewderivers.py", line 144, in _requestonly_view
    response = view(request)
  File "/usr/local/lib/python3.7/site-packages/cornice/service.py", line 590, in wrapper
    response = view_()
  File "/app/kinto/core/resource/__init__.py", line 350, in plural_get
    return self._plural_get(False)
  File "/app/kinto/core/resource/__init__.py", line 393, in _plural_get
    include_deleted=include_deleted,
  File "/app/kinto/views/permissions.py", line 77, in get_objects
    parent_id=parent_id,
  File "/app/kinto/views/permissions.py", line 109, in _get_objects
    perms_by_object_uri = backend.get_accessible_objects(principals)
  File "/app/kinto/core/decorators.py", line 45, in decorated
    result = method(self, *args, **kwargs)
  File "/app/kinto/core/permission/memory.py", line 101, in get_accessible_objects
    _, object_id, permission = key.split(":", 2)
ValueError: not enough values to unpack (expected 3, got 2)
"GET /v1/permissions?_since=6148&_token=" 500 (4 ms) agent=python-requests/2.24.0 authn_type=account errno=999 time=2020-12-21T11:49:07.494000 uid=admin
```
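The crash comes from assuming every permission key has three `:`-separated parts; a key with only two parts makes the 3-way unpack fail. A hedged sketch of a tolerant parse (the `prefix:object_id:permission` shape follows the traceback above; kinto's real key format may carry more structure):

```python
def parse_permission_key(key):
    """Expect 'prefix:object_id:permission'; tolerate malformed keys
    instead of letting the 3-way unpack raise ValueError."""
    parts = key.split(":", 2)
    if len(parts) != 3:
        return None          # skip keys that don't carry a permission
    _, object_id, permission = parts
    return object_id, permission

good = parse_permission_key("permission:/buckets/b1:read")
bad = parse_permission_key("permission:/buckets/b1")
```

Whether to skip such keys or treat them as a data-integrity error is a design decision for the backend; the point is that the 500 turns into a handled case.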
closed
2020-12-21T12:21:17Z
2025-02-12T08:10:30Z
https://github.com/Kinto/kinto/issues/2687
[ "bug", "easy-pick", "stale" ]
AlexB1986
16
PaddlePaddle/ERNIE
nlp
724
Ernie 3.0 Progressive Pretraining Schedules
Loved the Ernie 3.0 paper! I have a question about the "progressive pretraining" settings:

In section 3.3.1 on page 6, it says:

> We propose to progressively, smoothly, simultaneously increase: input sequence length, batch size, learning rate, dropout rate.

In section 5, how do the smaller models get to the same loss as the large model in only 10 hours on 8 V100s?

> As shown in Tab. 11, we record the time for the loss value of the model converges to the same as that of the ERNIE 3.0.

In section 5, it seems that only batch size and sequence length are scheduled. And "dropout keeps 0 in the progressive warmup stage". Is this correct?

To summarize, my 2 questions are:

1) Which variables should be increased during training?
2) How many steps (and with what schedules) are the models in section 5 trained for?

Thanks in advance and let me know if I should ask somewhere else!
closed
2021-08-02T16:03:02Z
2022-01-10T10:25:52Z
https://github.com/PaddlePaddle/ERNIE/issues/724
[ "wontfix" ]
sshleifer
4
flairNLP/flair
pytorch
3,095
[Feature]: Add Dual Encoder
### Problem statement

This paper describes a dual encoder approach for few-shot domain adaptation ([link](https://arxiv.org/abs/2203.08985)).

### Solution

Implement and test the approach.

### Additional Context

_No response_
closed
2023-02-08T14:26:56Z
2023-04-27T11:55:32Z
https://github.com/flairNLP/flair/issues/3095
[ "feature" ]
whoisjones
0
albumentations-team/albumentations
deep-learning
1,852
AttributeError: 'Compose' object has no attribute 'strict'
## Describe the bug

```
...
  File "/home/xxx/.local/lib/python3.12/site-packages/albumentations/core/composition.py", line 299, in __call__
    self.preprocess(data)
  File "/home/xxx/.local/lib/python3.12/site-packages/albumentations/core/composition.py", line 322, in preprocess
    if self.strict:
       ^^^^^^^^^^^
AttributeError: 'Compose' object has no attribute 'strict'
```

python 3.12
Ubuntu 24.04
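One way this can happen is when a `Compose` instance created (or pickled) under an older albumentations version is executed by newer code that expects the `strict` attribute set in `__init__`. A stdlib sketch of that mismatch and the usual backward-compatible guard (the classes here are illustrative stand-ins, not albumentations' code):

```python
import pickle

class ComposeV1:
    """Older version: no `strict` attribute."""
    def __init__(self, transforms):
        self.transforms = transforms

class ComposeV2(ComposeV1):
    """Newer version: adds `strict` in __init__, so instances restored
    from old state never receive it."""
    def __init__(self, transforms, strict=False):
        super().__init__(transforms)
        self.strict = strict

    def preprocess(self):
        # `if self.strict:` crashes on migrated instances;
        # getattr with a default is the backward-compatible guard:
        return getattr(self, "strict", False)

old = pickle.loads(pickle.dumps(ComposeV1(["resize"])))
old.__class__ = ComposeV2        # simulate: old state, new code
guarded = old.preprocess()       # no AttributeError thanks to getattr
has_attr = hasattr(old, "strict")
```

If the attribute error persists with freshly constructed pipelines, pinning a single albumentations version across processes is the first thing to check.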
closed
2024-07-23T06:13:09Z
2024-08-15T23:22:27Z
https://github.com/albumentations-team/albumentations/issues/1852
[ "bug" ]
MichaelMonashev
5
ymcui/Chinese-LLaMA-Alpaca
nlp
401
继续指令微调Alpaca生产内容重复
### Describe the problem in detail

Hello, I continued instruction fine-tuning on top of Alpaca-Plus using medical-domain instructions, and found that the model keeps generating the same content repeatedly; see the screenshot below. What could be causing this?

### Screenshot or log

![image](https://github.com/ymcui/Chinese-LLaMA-Alpaca/assets/38728769/403b13bf-a8b9-4fbd-aa0c-c0acffa5cbb2)

### Required checks (keep only the relevant items of the first three)

- [x] **Base model**: Alpaca-Plus
- [x] **OS**: Linux
- [x] **Issue category**: model training & fine-tuning / output quality
- [x] (Required) Since the related dependencies are updated frequently, please make sure you followed the steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [x] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched existing issues without finding a similar problem or solution
- [x] (Required) Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat); it is recommended to look for solutions in the corresponding project first
closed
2023-05-21T13:24:57Z
2023-06-05T22:02:27Z
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/401
[ "stale" ]
DendiHust
21
allenai/allennlp
data-science
5,044
Allennlp unnecessarily downloading huggingface models during evaluation
Hi! I trained a model with BERT (base) on MNLI, using the config in `allennlp-models`. I offhandedly noticed this, and I'm not sure if it's a bug. When I moved the output of the allennlp run to another machine (without a huggingface or allennlp cache) and ran `allennlp evaluate` with it, I noticed that downloads were happening. I've put the full evaluation log below, but it seems like these are the BERT base files (tokenizer, vocabulary, model checkpoints) being downloaded by huggingface (just comparing to my training logs and what I'd expect given what I know about the model). I'd imagine this is probably partially an issue with AllenNLP and partially an issue with huggingface, but figured I'd mention it here so it's documented at least. Anyway, this is probably pretty low-priority, but it seems like in this case, the model is instantiated with the pre-trained BERT weights (which have to be downloaded from the internet), and then immediately overwritten with what's stored in `model.tar.gz`. I wonder if there's some way to defer this download in the `evaluate` / `predict` case, since it really doesn't seem like they should be necessary (i.e., I'd expect inference to work fully offline). Thanks! ``` [nltk_data] Downloading package punkt to [nltk_data] /0xa1bf5d3e4bb248fd99bb5a28b6029041/nltk_data... [nltk_data] Unzipping tokenizers/punkt.zip. [nltk_data] Downloading package wordnet to [nltk_data] /0xa1bf5d3e4bb248fd99bb5a28b6029041/nltk_data... [nltk_data] Unzipping corpora/wordnet.zip. 
2021-03-08 04:42:41,199 - INFO - allennlp.common.plugins - Plugin allennlp_models available 2021-03-08 04:42:41,202 - INFO - allennlp.models.archival - loading archive file model_bundle/sampled_hps_05c6993db950b757ac8a02f9516055b8_fixed/model.tar.gz 2021-03-08 04:42:41,203 - INFO - allennlp.models.archival - extracting archive file model_bundle/sampled_hps_05c6993db950b757ac8a02f9516055b8_fixed/model.tar.gz to temp dir /tmp/tmphow9s0vy Downloading: 0%| | 0.00/433 [00:00<?, ?B/s] Downloading: 100%|██████████| 433/433 [00:00<00:00, 202kB/s] Downloading: 0%| | 0.00/213k [00:00<?, ?B/s] <snip> Downloading: 100%|██████████| 213k/213k [00:00<00:00, 1.03MB/s] Downloading: 0%| | 0.00/436k [00:00<?, ?B/s] <snip> Downloading: 100%|██████████| 436k/436k [00:00<00:00, 1.34MB/s] 2021-03-08 04:42:48,189 - INFO - allennlp.data.vocabulary - Loading token dictionary from /tmp/tmphow9s0vy/vocabulary. Downloading: 0%| | 0.00/436M [00:00<?, ?B/s] Downloading: 0%| | 1.49M/436M [00:00<00:29, 14.9MB/s] <snip> Downloading: 100%|██████████| 436M/436M [00:34<00:00, 12.6MB/s] 2021-03-08 04:43:29,667 - INFO - allennlp.models.archival - removing temporary unarchived model dir at /tmp/tmphow9s0vy 2021-03-08 04:43:29,716 - INFO - allennlp.common.checks - Pytorch version: 1.7.1+cu101 2021-03-08 04:43:29,717 - INFO - allennlp.commands.evaluate - Reading evaluation data from test.json loading instances: 0it [00:00, ?it/s] loading instances: 2742it [00:02, 1370.67it/s] loading instances: 5484it [00:04, 1366.55it/s] loading instances: 8218it [00:06, 1365.18it/s] loading instances: 9824it [00:07, 1379.82it/s] 2021-03-08 04:43:37,200 - INFO - allennlp.training.util - Iterating over dataset 0it [00:00, ?it/s] 2021-03-08 04:43:37,209 - INFO - allennlp.data.samplers.bucket_batch_sampler - No sorting keys given; trying to guess a good one 2021-03-08 04:43:37,209 - INFO - allennlp.data.samplers.bucket_batch_sampler - Using ['tokens'] as the sorting keys accuracy: 0.73, loss: 1.07 ||: : 114it [00:02, 
56.76it/s] accuracy: 0.73, loss: 1.09 ||: : 232it [00:04, 58.02it/s] accuracy: 0.72, loss: 1.08 ||: : 349it [00:06, 57.70it/s] accuracy: 0.72, loss: 1.08 ||: : 465it [00:08, 57.55it/s] accuracy: 0.73, loss: 1.07 ||: : 584it [00:10, 58.09it/s] accuracy: 0.73, loss: 1.05 ||: : 703it [00:12, 58.47it/s] accuracy: 0.73, loss: 1.04 ||: : 821it [00:14, 58.60it/s] accuracy: 0.73, loss: 1.04 ||: : 942it [00:16, 59.07it/s] accuracy: 0.73, loss: 1.03 ||: : 1061it [00:18, 58.47it/s] accuracy: 0.73, loss: 1.03 ||: : 1180it [00:20, 58.75it/s] accuracy: 0.73, loss: 1.02 ||: : 1228it [00:21, 58.38it/s] 2021-03-08 04:43:58,234 - INFO - allennlp.common.util - Metrics: { "accuracy": 0.7325936482084691, "loss": 1.0198601739944098 } 2021-03-08 04:43:58,234 - INFO - allennlp.commands.evaluate - Finished evaluating. ```
closed
2021-03-08T07:01:25Z
2021-04-17T04:49:46Z
https://github.com/allenai/allennlp/issues/5044
[ "bug", "Contributions welcome" ]
nelson-liu
7
junyanz/pytorch-CycleGAN-and-pix2pix
computer-vision
1,067
Store image in specific folder
Can anyone point out the part of the code where results are stored? I don't want to create folders within a folder, as CycleGAN currently creates two more folders.
open
2020-06-11T11:49:10Z
2020-06-14T22:23:36Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1067
[]
ghost
1
NullArray/AutoSploit
automation
1,131
Unhandled Exception (efb0bb411)
Autosploit version: `3.1.2` OS information: `Linux-4.19.0-kali4-amd64-x86_64-with-Kali-kali-rolling-kali-rolling` Running context: `autosploit.py -s -q **************` Error message: `'matches'` Error traceback: ``` Traceback (most recent call): File "/root/Desktop/AutoSploit/autosploit/main.py", line 109, in main AutoSploitParser().single_run_args(opts, loaded_tokens, loaded_exploits) File "/root/Desktop/AutoSploit/lib/cmdline/cmd.py", line 203, in single_run_args save_mode=search_save_mode File "/root/Desktop/AutoSploit/api_calls/shodan.py", line 45, in search raise AutoSploitAPIConnectionError(str(e)) errors: 'matches' ``` Metasploit launched: `False`
closed
2019-07-07T22:08:40Z
2019-07-24T10:51:04Z
https://github.com/NullArray/AutoSploit/issues/1131
[]
AutosploitReporter
0
521xueweihan/HelloGitHub
python
2,351
wiliwili
## Recommended project - Project URL: https://github.com/xfangfang/wiliwili - Category: C++ - Project title: A third-party Bilibili client for the Nintendo Switch. - Project description: wiliwili is a Bilibili client for the Switch, written in C++. It offers a browsing experience very close to the official Bilibili PC client and supports both touchscreen and gamepad controls, instantly turning your Switch into a set-top box or handheld tablet. - Highlight: Faithfully reproduces the official Bilibili PC client. - Sample code: (optional) - Screenshot: ![](https://raw.githubusercontent.com/xfangfang/wiliwili/yoga/docs/images/screenshot-2.png) - Future update plans:
closed
2022-09-04T01:03:17Z
2022-09-28T01:02:01Z
https://github.com/521xueweihan/HelloGitHub/issues/2351
[ "Published", "C++ project" ]
ChungZH
0
nonebot/nonebot2
fastapi
2,996
Plugin: lolinfo
### PyPI project name nonebot-plugin-lolinfo ### Plugin import package name nonebot_plugin_lolinfo ### Tags [{"label":"LOL","color":"#02ceff"},{"label":"英雄联盟","color":"#ff02fb"}] ### Plugin configuration _No response_
closed
2024-10-02T12:00:46Z
2024-10-05T11:01:24Z
https://github.com/nonebot/nonebot2/issues/2996
[ "Plugin" ]
Shadow403
5
replicate/cog
tensorflow
1,319
suppress verbose status percentage logs
hey, is there a way to customize the status response to contain just the current percentage instead of the (too highly) verbose historic percentage (and the bars)? Example: Return current percentage: ``` XX% ``` NOT THIS: ``` Hide logs Seed: 1057727382 0%| | 0/40 [00:00<?, ?it/s] 2%|▎ | 1/40 [00:00<00:05, 7.06it/s] 5%|▌ | 2/40 [00:00<00:05, 6.97it/s] 8%|▊ | 3/40 [00:00<00:05, 6.95it/s] 10%|█ | 4/40 [00:00<00:05, 6.94it/s] 12%|█▎ | 5/40 [00:00<00:05, 6.93it/s] 15%|█▌ | 6/40 [00:00<00:04, 6.93it/s] 18%|█▊ | 7/40 [00:01<00:04, 6.91it/s] 20%|██ | 8/40 [00:01<00:04, 6.90it/s] 22%|██▎ | 9/40 [00:01<00:04, 6.89it/s] 25%|██▌ | 10/40 [00:01<00:04, 6.89it/s] 28%|██▊ | 11/40 [00:01<00:04, 6.89it/s] 30%|███ | 12/40 [00:01<00:04, 6.89it/s] 32%|███▎ | 13/40 [00:01<00:03, 6.88it/s] 35%|███▌ | 14/40 [00:02<00:03, 6.89it/s] 38%|███▊ | 15/40 [00:02<00:03, 6.88it/s] 40%|████ | 16/40 [00:02<00:03, 6.88it/s] 42%|████▎ | 17/40 [00:02<00:03, 6.89it/s] 45%|████▌ | 18/40 [00:02<00:03, 6.89it/s] 48%|████▊ | 19/40 [00:02<00:03, 6.88it/s] 50%|█████ | 20/40 [00:02<00:02, 6.87it/s] 52%|█████▎ | 21/40 [00:03<00:02, 6.86it/s] 55%|█████▌ | 22/40 [00:03<00:02, 6.87it/s] 57%|█████▊ | 23/40 [00:03<00:02, 6.86it/s] 60%|██████ | 24/40 [00:03<00:02, 6.86it/s] 62%|██████▎ | 25/40 [00:03<00:02, 6.86it/s] 65%|██████▌ | 26/40 [00:03<00:02, 6.83it/s] 68%|██████▊ | 27/40 [00:03<00:01, 6.83it/s] 70%|███████ | 28/40 [00:04<00:01, 6.84it/s] 72%|███████▎ | 29/40 [00:04<00:01, 6.85it/s] 75%|███████▌ | 30/40 [00:04<00:01, 6.85it/s] 78%|███████▊ | 31/40 [00:04<00:01, 6.86it/s] 80%|████████ | 32/40 [00:04<00:01, 6.87it/s] 82%|████████▎ | 33/40 [00:04<00:01, 6.86it/s] 85%|████████▌ | 34/40 [00:04<00:00, 6.86it/s] 88%|████████▊ | 35/40 [00:05<00:00, 6.86it/s] 90%|█████████ | 36/40 [00:05<00:00, 6.86it/s] 92%|█████████▎| 37/40 [00:05<00:00, 6.85it/s] 95%|█████████▌| 38/40 [00:05<00:00, 6.85it/s] 98%|█████████▊| 39/40 [00:05<00:00, 6.85it/s] 100%|██████████| 40/40 [00:05<00:00, 6.87it/s] 100%|██████████| 40/40 
[00:05<00:00, 6.88it/s] ```
open
2023-09-26T01:48:33Z
2023-11-22T11:02:53Z
https://github.com/replicate/cog/issues/1319
[]
yosun
3
Kav-K/GPTDiscord
asyncio
444
Not sending answer when I use internet search [BUG]
When I use the internet search command, it shows this output: `An error occured while performing search: [Errno 2] No such file or directory: '/tmp/llama_index'`
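For context, the traceback points at a missing directory (`/tmp/llama_index`). As a hedged, illustrative sketch (the real fix likely belongs inside the bot's search code), creating the directory up front avoids this class of `[Errno 2]` failure:

```python
import os
import tempfile

# Hypothetical cache location mirroring the path in the error message;
# exist_ok=True makes the call idempotent, so repeated runs are safe.
cache_dir = os.path.join(tempfile.gettempdir(), "llama_index")
os.makedirs(cache_dir, exist_ok=True)
print(os.path.isdir(cache_dir))  # → True
```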
closed
2023-12-09T21:50:10Z
2023-12-31T10:16:36Z
https://github.com/Kav-K/GPTDiscord/issues/444
[ "bug" ]
nitin0909
2
Avaiga/taipy
automation
2,476
Retrieve the filtered data after a filter being applied
### Description The user can filter a table using built-in filters. The goal would be to get the filtered data when these filters are applied. Like that, the application can know the filters in place to display them in charts elsewhere or other info. ### Solution Proposed Having a callback or a bound variable to get the filtered data. @FabienLelaquais says it is not suitable to send the data from the front-end to the back-end. The one thing we can do is add an `on_filter` on the table and send a description of the applied filter(s) for the back-end to leverage. ### Acceptance Criteria - [ ] If applicable, a new demo code is provided to show the new feature in action. - [ ] Integration tests exhibiting how the functionality works are added. - [ ] Any new code is covered by a unit tested. - [ ] Check code coverage is at least 90%. - [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated. ### Code of Conduct - [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+). - [ ] I am willing to work on this issue (optional)
open
2025-03-04T14:26:46Z
2025-03-21T13:37:59Z
https://github.com/Avaiga/taipy/issues/2476
[ "🖰 GUI", "🟧 Priority: High", "✨New feature" ]
FlorianJacta
0
litestar-org/polyfactory
pydantic
642
Bug: ModelFactory doesn't support pydantic EmailStr Field constraints
### Description hi! /ᐠ. ̫ .ᐟ\ฅ A pydantic model with a `EmailStr` field raises `polyfactory.exceptions.ParameterException: received constraints for unsupported type <class 'pydantic.networks.EmailStr'>` if used with `Field` constraints. related: https://github.com/litestar-org/polyfactory/discussions/616 ### URL to code causing the issue _No response_ ### MCVE ```python from polyfactory.factories.pydantic_factory import ModelFactory from pydantic import BaseModel, EmailStr, Field class Email(BaseModel): email: EmailStr = Field(max_length=20) class EmailFactory(ModelFactory[Email]): ... def test_factory(): email = EmailFactory.build() test_factory() ``` ### Steps to reproduce ```bash 1. Copy and paste MCVE into a python file 2. Assuming pydantic and polyfactory are installed, execute the file 3. See error ``` ### Screenshots "In the format of: `![SCREENSHOT_DESCRIPTION](SCREENSHOT_LINK.png)`" ### Logs ```bash ``` ### Release Version 2.19.0 ### Platform - [x] Linux - [ ] Mac - [ ] Windows - [ ] Other (Please specify in the description above)
open
2025-02-07T20:31:58Z
2025-02-07T20:31:58Z
https://github.com/litestar-org/polyfactory/issues/642
[ "bug" ]
suspiciousRaccoon
0
Avaiga/taipy
automation
1,914
Prevent taipy gui builder from being registered over and over again on reload
### Description If the taipy gui builder has already been registered, don't register it again ### Acceptance Criteria - [ ] Ensure new code is unit tested, and check code coverage is at least 90%. - [ ] Propagate any change on the demos and run all of them to ensure there is no breaking change. - [ ] Ensure any change is well documented. ### Code of Conduct - [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+). - [X] I am willing to work on this issue (optional)
closed
2024-10-04T09:13:06Z
2024-10-04T09:33:48Z
https://github.com/Avaiga/taipy/issues/1914
[ "📈 Improvement", "🖰 GUI", "🟧 Priority: High", "Gui: Back-End" ]
dinhlongviolin1
0
tqdm/tqdm
pandas
1,256
Tqdm Zip()
Tqdm zip(): When I run tqdm over a `zip()` iterator it gives output like `10000it [00:00, 2294226.01it/s]`, but without zip it shows `100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:00<00:00, 2671531.21it/s]`. Having the same progress bar in the zip case would be really useful, because it shows the total count and a visual bar. Just an idea. With best regards.
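For reference, the bar degrades because `zip()` returns an iterator with no `__len__`, so tqdm cannot infer a total. A minimal sketch of the cause (tqdm not required to see it; the documented `total=` keyword is the workaround):

```python
xs = list(range(100))
ys = list(range(100))

# zip() yields an iterator without __len__, so tqdm cannot infer the
# total and falls back to the bare "Nit [...]" counter.
zipped = zip(xs, ys)
has_len = hasattr(zipped, "__len__")

# The documented workaround is to pass the length explicitly:
#   for a, b in tqdm(zip(xs, ys), total=min(len(xs), len(ys))):
#       ...
total = min(len(xs), len(ys))
print(has_len, total)  # → False 100
```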
closed
2021-10-08T04:58:53Z
2021-10-14T14:17:32Z
https://github.com/tqdm/tqdm/issues/1256
[ "question/docs ‽" ]
Programmer-RD-AI
3
vitalik/django-ninja
rest-api
1,233
Pass serialization context (similar to validation context) with request to `model_dump`
**Is your feature request related to a problem? Please describe.** We're trying to post-process response entities to erase data from responses based on user permissions. We're using a custom framework to do this (somewhat similar to ninja permissions, but applied at the field level). Based on the model class and user permissions, it generates a list of fields to exclude for use with `model_dump`. ``` excludes = get_excludes_by_permissions(api_response_schema, request.user) json_response = api_response_schema.model_dump_json(exclude=excludes) ``` **Describe the solution you'd like** Starting from 2.7, Pydantic added serialization context support (https://github.com/pydantic/pydantic/pull/8965). Ninja already passes a similar context to validation in `Operation._result_to_response`: https://github.com/vitalik/django-ninja/blob/master/ninja/operation.py#L258-L269 It would be useful to have the request in the serialization context as well.
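A dependency-free sketch of the permission-based stripping this request describes (role table and names are purely illustrative stand-ins; the real code relies on `model_dump`'s `exclude=`):

```python
# Library-free mimic of model_dump(exclude=...): drop the fields a
# given role is not allowed to see.
ROLE_EXCLUDES = {
    "guest": {"email", "salary"},
    "admin": set(),
}

def get_excludes_by_permissions(role: str) -> set:
    return ROLE_EXCLUDES.get(role, set())

def dump(entity: dict, exclude: set) -> dict:
    return {k: v for k, v in entity.items() if k not in exclude}

member = {"name": "Ada", "email": "ada@example.com", "salary": 100}
public = dump(member, get_excludes_by_permissions("guest"))
print(public)  # → {'name': 'Ada'}
```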
closed
2024-07-17T12:24:35Z
2024-08-12T12:06:31Z
https://github.com/vitalik/django-ninja/issues/1233
[]
scorpp
3
graphql-python/graphene
graphql
1,288
How can I avoid repeated code in my mutations?
I find myself making this mutation often: ```gql mutation { createMember(input: {name: "new name", email: "email@email.com", password: "123"}) { member { id } errors { field message } } } ``` Similarly, I like to have an `errors` list returned in all my other mutations to make sure the mutation was performed correctly. This can be easily done for a single mutation by having a `errors = graphene.List( graphene.NonNull(commons.Error), description='List of errors that occurred executing the mutation.', )` attribute in my schema. But as my schema grows larger the amount of repeated code I have for these mutations increases. Is there a way I can somehow abstract this `errors` attribute to a `BaseMutation` class and then inherit my mutations from that instead?
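The usual answer is to hoist the shared `errors` attribute into a base class that every mutation inherits. A plain-Python sketch of the shape (graphene specifics omitted; in real code `errors` would be the `graphene.List` field from the question):

```python
class Error:
    """Mimics the `commons.Error` type from the question."""
    def __init__(self, field: str, message: str) -> None:
        self.field = field
        self.message = message

class BaseMutation:
    """Shared base: every concrete mutation gets an `errors` list."""
    def __init__(self) -> None:
        self.errors = []

class CreateMember(BaseMutation):
    def __init__(self, name: str, email: str) -> None:
        super().__init__()
        if not name:
            self.errors.append(Error("name", "must not be empty"))
        self.member = {"name": name, "email": email} if not self.errors else None

ok = CreateMember("new name", "email@email.com")
bad = CreateMember("", "email@email.com")
print(len(ok.errors), len(bad.errors))  # → 0 1
```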
closed
2020-11-27T18:01:34Z
2024-06-22T10:09:14Z
https://github.com/graphql-python/graphene/issues/1288
[ "🐛 bug" ]
IgnisDa
1
e2b-dev/code-interpreter
jupyter
34
Support ES6 modules in JS runtime and top-level await
The JavaScript runtime is currently based on [ijavascript kernel runtime](https://github.com/n-riesco/ijavascript). ijavascript kernel doesn't support `import`. Ideally, we should fix it and `import` should just work. Less ideally (and the "less" is big here), we need to put a disclaimer here for users. Similarly with `await`. The `await` doesn't work in the top most scope. Eg running this code ```js const fs = require('node:fs'); const fetch = require('node-fetch'); console.log('Hello'); const url = 'https://jsonplaceholder.typicode.com/posts/1'; // Fetch data from the API const response = await fetch(url); const data = await response.text(); console.log(data); ``` will produce the following error ``` ExecutionError { name: 'SyntaxError', value: 'await is only valid in async functions and the top level bodies of modules', tracebackRaw: [ 'evalmachine.<anonymous>:10', 'const response = await fetch(url);', ' ^^^^^', '', 'SyntaxError: await is only valid in async functions and the top level bodies of modules', ' at new Script (node:vm:94:7)', ' at createScript (node:vm:250:10)', ' at Object.runInThisContext (node:vm:298:10)', ' at run ([eval]:1020:15)', ' at onRunRequest ([eval]:864:18)', ' at onMessage ([eval]:828:13)', ' at process.emit (node:events:517:28)', ' at emit (node:internal/child_process:944:14)', ' at process.processTicksAndRejections (node:internal/process/task_queues:83:21)' ] } ``` Deno's Jupyter kernel would tick both boxes (`import` and top-level `await`): https://blog.jupyter.org/bringing-modern-javascript-to-the-jupyter-notebook-fc998095081e We'd need to check if NPM dependencies work out of the box for users
open
2024-08-08T19:05:37Z
2025-03-23T22:54:31Z
https://github.com/e2b-dev/code-interpreter/issues/34
[ "bug", "improvement" ]
mlejva
3
gradio-app/gradio
python
10,105
The message format of examples of multimodal chatbot is different from that of normal submission
### Describe the bug When you click the example image inside the Chatbot component of the following app ```py import gradio as gr def run(message, history): print(message) return "aaa" demo = gr.ChatInterface( fn=run, examples=[ [ { "text": "Describe the image.", "files": ["cats.jpg"], }, ], ], multimodal=True, type="messages", cache_examples=False, ) demo.launch() ``` ![](https://github.com/user-attachments/assets/0c4037da-297e-4c47-b9ec-9502f2ba36e1) the printed message format looks like this: ``` {'text': 'Describe the image.', 'files': [{'path': '/tmp/gradio/4766eb361fb2233afe48adb8f799f04eee25d8f2eb32fd4a835d27f777e0dee6/cats.jpg', 'url': 'https://hysts-debug-multimodal-chat-examples.hf.space/gradio_api/file=/tmp/gradio/4766eb361fb2233afe48adb8f799f04eee25d8f2eb32fd4a835d27f777e0dee6/cats.jpg', 'size': None, 'orig_name': 'cats.jpg', 'mime_type': 'image/jpeg', 'is_stream': False, 'meta': {'_type': 'gradio.FileData'}}]} ``` But when you submit the same input from the textbox component in the bottom, it looks like this: ``` {'text': 'Describe the image.', 'files': ['/tmp/gradio/4766eb361fb2233afe48adb8f799f04eee25d8f2eb32fd4a835d27f777e0dee6/cats.jpg']} ``` This inconsistency is problematic. I think the latter is the correct and expected format. ### Have you searched existing issues? 🔎 - [X] I have searched and found no existing issues ### Reproduction https://huggingface.co/spaces/hysts-debug/multimodal-chat-examples ### Screenshot _No response_ ### Logs _No response_ ### System Info ```shell gradio==5.7.1 ``` ### Severity I can work around it
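Until the formats are unified, a small normalization shim (a hedged workaround, not part of gradio's API) can collapse the FileData-dict form produced by example clicks into the plain-path form produced by regular submits:

```python
def normalize_message(msg: dict) -> dict:
    """Reduce each file entry to its path, whether it arrived as a
    plain string or as a gradio FileData-style dict."""
    files = [
        f["path"] if isinstance(f, dict) else f
        for f in msg.get("files", [])
    ]
    return {"text": msg.get("text", ""), "files": files}

# Sample payloads mirroring the two shapes from the report (paths shortened).
from_example = {
    "text": "Describe the image.",
    "files": [{"path": "/tmp/gradio/abc/cats.jpg", "url": "https://example/cats.jpg"}],
}
from_submit = {"text": "Describe the image.", "files": ["/tmp/gradio/abc/cats.jpg"]}

print(normalize_message(from_example) == normalize_message(from_submit))  # → True
```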
closed
2024-12-03T08:35:43Z
2024-12-07T15:51:01Z
https://github.com/gradio-app/gradio/issues/10105
[ "bug" ]
hysts
0
horovod/horovod
deep-learning
3,801
Unable to use GPU on 2nd machine
Hi, I have set up horovod on a k8s cluster with 2 GPU nodes using spark-operator. I have executed the mnist example using tensorflow, and it ran successfully on both nodes (utilizing the GPUs on both nodes). However, when I use KerasEstimator on Spark, the training executes successfully but I think only one GPU is being used. I am following this example: https://docs.databricks.com/_static/notebooks/deep-learning/horovod-spark-estimator-keras.html Here are the logs: [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Bootstrap : Using eth0:10.84.52.31<0> [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO NET/IB : No device found. [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO NET/Socket : Using [0]eth0:10.84.52.31<0> [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Using network Socket [1,0]<stdout>:NCCL version 2.11.4+cuda11.4 [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Bootstrap : Using eth0:10.84.179.52<0> [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO NET/IB : No device found.
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO NET/Socket : Using [0]eth0:10.84.179.52<0> [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Using network Socket [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Setting affinity for GPU 0 to 55555555,55555555 [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Channel 00/02 : 0 1 [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Channel 01/02 : 0 1 [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Setting affinity for GPU 0 to 55555555,55555555 [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Channel 00 : 0[2000] -> 1[4000] [receive] via NET/Socket/0 [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Channel 01 : 0[2000] -> 1[4000] [receive] via NET/Socket/0 [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Channel 00 : 1[4000] -> 0[2000] [send] via NET/Socket/0 [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Channel 01 : 1[4000] -> 0[2000] [send] via NET/Socket/0 [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Channel 00 : 1[4000] -> 0[2000] [receive] via NET/Socket/0 [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Channel 01 : 1[4000] -> 0[2000] [receive] via NET/Socket/0 [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL 
INFO Channel 00 : 0[2000] -> 1[4000] [send] via NET/Socket/0 [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Channel 01 : 0[2000] -> 1[4000] [send] via NET/Socket/0 [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Connected all rings [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Connected all trees [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO comm 0x7fd2247488e0 rank 0 nranks 2 cudaDev 0 busId 2000 - Init COMPLETE [1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Launch mode Parallel [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Connected all rings [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Connected all trees [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer [1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO comm 0x7fad647478a0 rank 1 nranks 2 cudaDev 0 busId 4000 - Init COMPLETE [1,0]<stdout>: [1,1]<stderr>:WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0086s vs `on_train_batch_end` time: 0.0658s). Check your callbacks. 
[1,0]<stderr>:WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0053s vs `on_train_batch_end` time: 0.0687s). Check your callbacks. 1/4851 [..............................] - ETA: 8:35:39 - loss: 1.0356 - accuracy: 0.4844[1,0]<stdout>:  9/4851 [..............................] - ETA: 30s - loss: 0.9629 - accuracy: 0.4219 [1,0]<stdout>: 17/4851 [..............................] - ETA: 31s - loss: 0.9131 - accuracy: 0.4265[1,0]<stdout>: 24/4851 [..............................] - ETA: 33s - loss: 0.8747 - accuracy: 0.4421[1,0]<stdout>: 31/4851 [..............................] - ETA: 34s - loss: 0.8364 - accuracy: 0.4768[1,0]<stdout>: 39/4851 [..............................] - ETA: 34s - loss: 0.7905 - accuracy: 0.5445[1,0]<stdout>: 48/4851 [..............................] - ETA: 32s - loss: 0.7389 - accuracy: 0.6286[1,0]<stdout>: 56/4851 [..............................] - ETA: 32s - loss: 0.6957 - accuracy: 0.6816[1,0]<stdout>: 64/4851 [..............................] - ETA: 32s - loss: 0.6540 - accuracy: 0.7214[1,0]<stdout>: 71/4851 [..............................] - ETA: 32s - loss: 0.6205 - accuracy: 0.7489[1,0]<stdout>: 79/4851 [..............................] - ETA: 32s - loss: 0.5844 - accuracy: 0.7743[1,0]<stdout>: 87/4851 [..............................] - ETA: 32s - loss: 0.5504 - accuracy: 0.7951[1,0]<stdout>: 95/4851 [..............................] - ETA: 32s - loss: 0.5194 - accuracy: 0.8123[1,0]<stdout>: 103/4851 [..............................] - ETA: 32s - loss: 0.4912 - accuracy: 0.8269[1,0]<stdout>: 112/4851 [..............................] - ETA: 31s - loss: 0.4623 - accuracy: 0.8408[1,0]<stdout>: 121/4851 [..............................] - ETA: 31s - loss: 0.4364 - accuracy: 0.8525[1,0]<stdout>: 131/4851 [..............................] - ETA: 30s - loss: 0.4106 - accuracy: 0.8637[1,0]<stdout>: 140/4851 [..............................] 
- ETA: 30s - loss: 0.3886 - accuracy: 0.8724[1,0]<stdout>: 148/4851 [..............................] - ETA: 30s - loss: 0.3706 - accuracy: 0.8793[1,0]<stdout>: 156/4851 [..............................] - ETA: 30s - loss: 0.3542 - accuracy: 0.8855[1,0]<stdout>: 164/4851 [>.............................] - ETA: 30s - loss: 0.3388 - accuracy: 0.8911[1,0]<stdout>: 172/4851 [>.............................] - ETA: 30s - loss: 0.3246 - accuracy: 0.8962[1,0]<stdout>: 180/4851 [>.............................] - ETA: 30s - loss: 0.3116 - accuracy: 0.9008[1,0]<stdout>: 188/4851 [>.............................] - ETA: 30s - loss: 0.2994 - accuracy: 0.9050[1,0]<stdout>: 196/4851 [>.............................] - ETA: 30s - loss: 0.2882 - accuracy: 0.9089[1,0]<stdout>: 204/4851 [>.............................] - ETA: 30s - loss: 0.2778 - accuracy: 0.9125[1,0]<stdout>: 212/4851 [>.............................] - ETA: 30s - loss: 0.2680 - accuracy: 0.9158[1,0]<stdout>: 220/4851 [>.............................] - ETA: 30s - loss: 0.2588 - accuracy: 0.9188[1,0]<stdout>: 227/4851 [>.............................] - ETA: 30s - loss: 0.2513 - accuracy: 0.9213[1,0]<stdout>: 235/4851 [>.............................] - ETA: 30s - loss: 0.2432 - accuracy: 0.9240[1,0]<stdout>: 243/4851 [>.............................] - ETA: 30s - loss: 0.2356 - accuracy: 0.9265[1,0]<stdout>: 251/4851 [>.............................] - ETA: 30s - loss: 0.2285 - accuracy: 0.9288[1,0]<stdout>: 259/4851 [>.............................] - ETA: 30s - loss: 0.2218 - accuracy: 0.9310[1,0]<stdout>: 267/4851 [>.............................] - ETA: 30s - loss: 0.2155 - accuracy: 0.9331[1,0]<stdout>: 275/4851 [>.............................] - ETA: 30s - loss: 0.2095 - accuracy: 0.9351[1,0]<stdout>: 283/4851 [>.............................] - ETA: 30s - loss: 0.2038 - accuracy: 0.9369[1,0]<stdout>: 291/4851 [>.............................] 
- ETA: 30s - loss: 0.1985 - accuracy: 0.9386[1,0]<stdout>: 299/4851 [>.............................] - ETA: 30s - loss: 0.1933 - accuracy: 0.9403[1,0]<stdout>: 307/4851 [>.............................] - ETA: 30s - loss: 0.1885 - accuracy: 0.9418[1,0]<stdout>: 316/4851 [>.............................] - ETA: 30s - loss: 0.1833 - accuracy: 0.9435[1,0]<stdout>: 325/4851 [=>............................] - ETA: 30s - loss: 0.1784 - accuracy: 0.9450[1,0]<stdout>: 334/4851 [=>............................] - ETA: 30s - loss: 0.1738 - accuracy: 0.9465[1,0]<stdout>: 343/4851 [=>............................] - ETA: 30s - loss: 0.1694 - accuracy: 0.9479[1,0]<stdout>: 351/4851 [=>............................] - ETA: 29s - loss: 0.1656 - accuracy: 0.9491[1,0]<stdout>: 358/4851 [=>............................] - ETA: 30s - loss: 0.1625 - accuracy: 0.9501[1,0]<stdout>: 366/4851 [=>............................] - ETA: 29s - loss: 0.1590 - accuracy: 0.9512[1,0]<stdout>: 374/4851 [=>............................] - ETA: 29s - loss: 0.1557 - accuracy: 0.9522[1,0]<stdout>: 383/4851 [=>............................] - ETA: 29s - loss: 0.1521 - accuracy: 0.9534[1,0]<stdout>: 391/4851 [=>............................] - ETA: 29s - loss: 0.1491 - accuracy: 0.9543[1,0]<stdout>: 400/4851 [=>............................] - ETA: 29s - loss: 0.1458 - accuracy: 0.9554[1,0]<stdout>: 408/4851 [=>............................] - ETA: 29s - loss: 0.1430 - accuracy: 0.9562[1,0]<stdout>: 417/4851 [=>............................] - ETA: 29s - loss: 0.1400 - accuracy: 0.9572[1,0]<stdout>: 422/4851 [=>............................] - ETA: 29s - loss: 0.1384 - accuracy: 0.9577[1,0]<stdout>: 428/4851 [=>............................] - ETA: 29s - loss: 0.1365 - accuracy: 0.9583[1,0]<stdout>: 437/4851 [=>............................] - ETA: 29s - loss: 0.1338 - accuracy: 0.9591[1,0]<stdout>: 447/4851 [=>............................] 
- ETA: 29s - loss: 0.1314 - accuracy: 0.9600[1,0]<stdout>: 456/4851 [=>............................] - ETA: 29s - loss: 0.1289 - accuracy: 0.9608[1,0]<stdout>: 465/4851 [=>............................] - ETA: 29s - loss: 0.1264 - accuracy: 0.9616[1,0]<stdout>: 474/4851 [=>............................] - ETA: 29s - loss: 0.1241 - accuracy: 0.9623[1,0]<stdout>: 483/4851 [=>............................] - ETA: 29s - loss: 0.1218 - accuracy: 0.9630[1,0]<stdout>: 491/4851 [==>...........................] - ETA: 28s - loss: 0.1199 - accuracy: 0.9636[1,0]<stdout>: 499/4851 [==>...........................] - ETA: 28s - loss: 0.1180 - accuracy: 0.9642[1,0]<stdout>: 508/4851 [==>...........................] - ETA: 28s - loss: 0.1160 - accuracy: 0.9648[1,0]<stdout>: 518/4851 [==>...........................] - ETA: 28s - loss: 0.1138 - accuracy: 0.9655[1,0]<stdout>: 527/4851 [==>...........................] - ETA: 28s - loss: 0.1118 - accuracy: 0.9661[1,0]<stdout>: 536/4851 [==>...........................] - ETA: 28s - loss: 0.1100 - accuracy: 0.9667[1,0]<stdout>: 545/4851 [==>...........................] - ETA: 28s - loss: 0.1082 - accuracy: 0.9672[1,0]<stdout>: 554/4851 [==>...........................] - ETA: 28s - loss: 0.1065 - accuracy: 0.9677[1,0]<stdout>: 562/4851 [==>...........................] - ETA: 28s - loss: 0.1050 - accuracy: 0.9682[1,0]<stdout>: 572/4851 [==>...........................] - ETA: 27s - loss: 0.1032 - accuracy: 0.9688[1,0]<stdout>:
open
2022-12-30T17:14:54Z
2023-02-13T10:58:48Z
https://github.com/horovod/horovod/issues/3801
[ "bug", "spark" ]
obaid1922
0
man-group/arctic
pandas
27
No handlers could be found for logger "arctic.store.version_store"
So when I run this code I get the following warning. Do you know what might be causing that? `>>> store.initialize_library('NASDAQ')` prints `No handlers could be found for logger "arctic.store.version_store"`
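For reference, this warning comes from Python's logging module when a library logger emits a record before the application configures any handler; installing one (e.g. via `logging.basicConfig()`) silences it. A minimal sketch:

```python
import logging

# Installing a root handler once at application startup is enough; the
# library's "arctic.store.version_store" logger then propagates to it.
logging.basicConfig(level=logging.INFO)

log = logging.getLogger("arctic.store.version_store")
log.info("handlers are now configured")

root_has_handlers = len(logging.getLogger().handlers) > 0
print(root_has_handlers)  # → True
```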
closed
2015-09-01T16:59:25Z
2019-01-23T22:33:49Z
https://github.com/man-group/arctic/issues/27
[ "question" ]
mattdornfeld
3
nolar/kopf
asyncio
1,024
Only add finalizers for specific sub-types of resources
### Keywords kopf finalizers filter ### Problem **Context** We are using cert-manager to generate certificates. We have several issuer systems for certificates. When a certificate is removed, we want to use a kopf.on.delete() to call these systems to perform some actions. All external certificates need these calls while the certificates from the local cert-manager do not need external calls. **Problem** When removing a cluster, all applications and their certificates are deleted. This works great as with each deleted certificate, kopf.on.delete() is called and our handler is triggered. However, for one specific type of certificates -the ones from cert-manager itself- no finalizer should be added. Cert-manager is removed as the last application, after our kopf-operator (due to sync-waves). But at that point the finalizer cannot be called and the deletion of the certificates is pending forever. **Attempts** We have tried to change the finalizers list of specific certificate resources at on.create(), on.update(), on.resume() and on.event() but Kopf puts them back as soon as it notices the finalizer has been removed. We have thought about going for an on.event() call instead of on.delete(), thus making the call non-blocking ('optional=True'). But that could lead to a resource that has already been deleted being fed to the operator, which then cannot determine what actions to take. This is gambling, which we don't like in our production systems. **Wishes** Can we somehow turn off finalizers from specific types of resources? In our case: certificates that have spec.issuerRef.name="some-issuer" should not have finalizers, while certificates that have spec.issuerRef.name="other-issuer" do need the finalizer. This feature does not seem to be part of the documentation nor has it been posed as a question here on GitHub yet, as far as I can see. Hope to hear from you! Cheers, Eelko
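One hedged avenue worth checking: kopf handlers accept per-handler filters such as `when=`, and registering the delete handler so that it never matches cert-manager's own certificates may keep kopf from blocking their deletion (whether the finalizer is then skipped for non-matching resources is worth verifying against the kopf docs). The predicate itself is plain Python:

```python
def needs_external_revocation(spec, **_):
    """Only certificates from external issuers need our delete calls.
    Issuer names here are the placeholders from the question."""
    return spec.get("issuerRef", {}).get("name") != "some-issuer"

# Assumed usage (decorator form; check against kopf's filter docs):
# @kopf.on.delete("cert-manager.io", "v1", "certificates",
#                 when=needs_external_revocation)
# def revoke(spec, **kwargs):
#     ...

print(needs_external_revocation({"issuerRef": {"name": "some-issuer"}}))  # → False
print(needs_external_revocation({"issuerRef": {"name": "other-issuer"}}))  # → True
```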
open
2023-05-01T13:59:57Z
2023-07-24T07:51:58Z
https://github.com/nolar/kopf/issues/1024
[ "question" ]
eelkoniovb
2
s3rius/FastAPI-template
graphql
186
alembic upgrade head failed: ModuleNotFoundError: No module named 'pydantic_settings'
I've read the issues here related to alembic migrations, but it looks like the failure message I got is different, so I'd like to post for help. In my .env file, I added my URL as follows (suppose my app has the prefix `MYAPP` with a remote postgresql server IP: 120.120.120.120): `MYAPP_sqlalchemy.url=postgresql://user1:pw1@120.120.120.120/myapp_db` Then, in my local computer's terminal, when I execute the following command: ``` alembic upgrade head ``` I got errors complaining that `pydantic_settings` was not found: ``` File "/Users/user1/myapp/myapp/db/migrations/env.py", line 10, in <module> from myapp.settings import settings File "/Users/user1/myapp/./myapp/settings.py", line 6, in <module> from pydantic_settings import BaseSettings, SettingsConfigDict ModuleNotFoundError: No module named 'pydantic_settings' ``` Looks like a path issue when loading the pydantic_settings package. Has anyone seen similar issues?
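This usually means alembic is being run by a different interpreter than the one the project's dependencies were installed into. A quick, stdlib-only check (run it with the same `python` that alembic uses):

```python
import importlib.util
import sys

# Which interpreter is running, and can it see the package that the
# project's env.py tries to import?
print(sys.executable)
spec = importlib.util.find_spec("pydantic_settings")
missing = spec is None

# If missing is True, install it into *this* environment:
#   python -m pip install pydantic-settings
# and make sure the `alembic` command resolves to the same env.
print(missing)
```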
closed
2023-08-12T04:27:49Z
2023-08-12T20:18:55Z
https://github.com/s3rius/FastAPI-template/issues/186
[]
rcholic
2
sinaptik-ai/pandas-ai
data-visualization
1,379
Passing and receiving response headers to and from the language model
### 🚀 The feature

Can you pass extra headers to the large language model and extract the headers from the response?

### Motivation, pitch

I have AzureOpenAI endpoints in different regions, and my use case requires that some of my data should not leave its country of origin. It's super easy to do that with AzureOpenAI chat completions, by passing in the target endpoint for each query, but I can't find the equivalent functionality with PandasAI Agents. Moreover, I don't have any way of verifying which endpoint gave me my response, because I only get a string, dataframe or graph in the response. I know it's possible because PandasAI sits above the language models, which can accept and return extra response headers. I need to make a conversational data agent, and PandasAI is ticking all the boxes except that security one. Is this already possible and I missed it in the documentation somewhere?

### Alternatives

_No response_

### Additional context

_No response_
closed
2024-09-26T23:07:32Z
2025-01-02T16:09:06Z
https://github.com/sinaptik-ai/pandas-ai/issues/1379
[ "enhancement" ]
cmapund
1
pallets/flask
python
4,413
The flask service is unstable after startup
Environment:

- Python version: python3.8
- Flask version: 2.0.2

Error:

My Flask app is installed in a conda environment and started with the following commands:

```shell
conda activate work
nohup python start.py &
```

The service runs well, but it shuts down automatically at regular intervals. One day it suddenly stopped receiving requests and needed to be restarted, and I don't know where to start debugging. I'm mainly responsible for Java services and handle operation and maintenance, so I don't know much about Flask.
closed
2022-01-12T02:45:59Z
2022-01-27T00:03:43Z
https://github.com/pallets/flask/issues/4413
[]
Jzow
1
home-assistant/core
asyncio
141,224
MQTT device trackers no longer on map in 2025.3.3
### The problem

This is a continuation of the frontend issue: https://github.com/home-assistant/frontend/issues/24710

Basically, it seems like HA version 2025.3.3 no longer reads the position of an MQTT device tracker from its attributes, and as a result they don't show up on the map:

![Image](https://github.com/user-attachments/assets/ede9535a-fc3d-4730-be67-80f491bb82fd)

The strange thing is that it started working again for a few hours but quickly went back to the broken state you see above.

### What version of Home Assistant Core has the issue?

2025.3.3

### What was the last working version of Home Assistant Core?

2025.2.5, maybe 2025.3.2 or 2025.3.1

### What type of installation are you running?

Home Assistant OS

### Integration causing the issue

MQTT

### Link to integration documentation on our website

https://www.home-assistant.io/integrations/mqtt/

### Diagnostics information

_No response_

### Example YAML snippet

```yaml
```

### Anything in the logs that might be useful for us?
```txt Uncaught (in promise) Error: Invalid LatLng object: (unknown, unknown) j LatLng.js:32 H LatLng.js:123 initialize Marker.js:112 i Class.js:24 s decorated_marker.ts:12 value ha-map.ts:556 value ha-map.ts:145 performUpdate reactive-element.ts:1333 scheduleUpdate reactive-element.ts:1262 _$Ej reactive-element.ts:1237 requestUpdate reactive-element.ts:1215 set reactive-element.ts:731 value ha-map.ts:235 value ha-map.ts:116 connectedCallback scoped-custom-element-registry.js:248 k lit-html.ts:1411 $ lit-html.ts:1454 g lit-html.ts:1563 _$AI lit-html.ts:1384 W lit-html.ts:2183 update lit-element.ts:166 performUpdate reactive-element.ts:1333 scheduleUpdate reactive-element.ts:1262 _$Ej reactive-element.ts:1237 requestUpdate reactive-element.ts:1215 set reactive-element.ts:731 value hui-map-card.ts:132 d create-element-base.ts:219 promise callback*40249/d/< create-element-base.ts:214 d create-element-base.ts:308 l create-element-base.ts:242 v create-card-element.ts:107 value hui-card.ts:123 value hui-card.ts:42 value hui-section.ts:71 _cards hui-section.ts:290 value hui-section.ts:289 value hui-section.ts:199 value hui-section.ts:99 performUpdate reactive-element.ts:1328 scheduleUpdate reactive-element.ts:1262 LatLng.js:32:8 ``` ### Additional information _No response_
closed
2025-03-23T16:28:22Z
2025-03-23T21:36:42Z
https://github.com/home-assistant/core/issues/141224
[ "needs-more-information", "integration: mqtt" ]
Cyberes
3
WeblateOrg/weblate
django
14,130
Do not split digest emails, summarize instead
### Describe the problem

This morning I woke up to 79 emails from Weblate in my inbox. All sent within the span of two minutes, all notifying about a single project (Organic Maps).

I checked my preferences. They correctly say that I want to receive digests, not one email per message. However, it seems that even though I have set my preferences like this, if there are more than a certain number of notifications of the same type (100?), the notification system will split them into multiple digests and send them as multiple emails. This seems rather suboptimal.

### Describe the solution you would like

Instead of sending dozens of emails, send just one, but add a note mentioning that "there are more changes than we have included here". Getting dozens of emails about new strings/suggestions/etc. isn't any more helpful than getting just one email. It's counter-productive even.

### Describe alternatives you have considered

_No response_

### Screenshots

_No response_

### Additional context

_No response_
open
2025-03-07T13:17:37Z
2025-03-13T09:14:58Z
https://github.com/WeblateOrg/weblate/issues/14130
[ "enhancement", "hacktoberfest", "help wanted", "good first issue", "Area: Notifcations" ]
rimas-kudelis
4
jwkvam/bowtie
jupyter
243
test restoring state for all components
not sure how easy this will be, but it will add assurance that refreshing hopefully works or at the very least doesn't break the app.

## Basic test structure:

1. Build app
2. Use selenium to interact with widget.
3. Refresh web page
4. Make sure the page loaded successfully.
5. Extra credit: make sure the state is same as it was before refresh.

## widgets tested

- [ ] dates
- [ ] dropdown

etc.
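A hedged sketch of steps 3-5: `FakeDriver` stands in for any object exposing `refresh()` plus a state reader; a real selenium WebDriver (wrapped to read widget state from the DOM) would slot into the same helper. All names here are illustrative, not bowtie's actual test API.

```python
# Sketch of steps 3-5: capture state, refresh, compare. `driver` is any
# object with refresh() and a way to read widget state.

def assert_state_survives_refresh(driver, read_state):
    before = read_state(driver)
    driver.refresh()
    after = read_state(driver)
    assert after == before, f"state changed across refresh: {before!r} -> {after!r}"

class FakeDriver:
    """Stand-in driver whose state persists across refresh."""
    def __init__(self):
        self.state = {"dates": "2018-09-22", "dropdown": "b"}

    def refresh(self):
        pass  # a real page reload would happen here
```

With selenium, `read_state` would query each widget's DOM value after `driver.refresh()`, which covers the "extra credit" part of the plan.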
open
2018-09-22T06:14:55Z
2018-09-22T16:10:57Z
https://github.com/jwkvam/bowtie/issues/243
[ "reliability" ]
jwkvam
0
neuml/txtai
nlp
172
Add index archive support
Currently, embeddings indexes are saved and loaded to directories. Add methods to support reading and writing indexes from/to archive files. Support following formats: - tar.bz2 - tar.gz - tar.xz - zip This method will store compressed indexes. Compressed indexes can be used directly and/or as a backup strategy.
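The four formats listed map directly onto the stdlib's archive names, which is one hedged way this could be implemented and tested independently of txtai's eventual save/load API (which may differ):

```python
# Round-trip a directory through each listed archive format using the stdlib.
import pathlib
import shutil
import tempfile

FORMATS = {"tar.bz2": "bztar", "tar.gz": "gztar", "tar.xz": "xztar", "zip": "zip"}

def roundtrip(index_dir: str, fmt: str, workdir: str) -> str:
    """Archive index_dir in the given format, extract it, return the new path."""
    archive = shutil.make_archive(f"{workdir}/index", FORMATS[fmt], index_dir)
    out = pathlib.Path(workdir) / f"restored-{FORMATS[fmt]}"
    out.mkdir(parents=True, exist_ok=True)
    shutil.unpack_archive(archive, out)
    return str(out)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        src = pathlib.Path(tmp) / "index"
        src.mkdir()
        (src / "config").write_text("dims=768")
        for fmt in FORMATS:
            restored = roundtrip(str(src), fmt, tmp)
            print(fmt, (pathlib.Path(restored) / "config").read_text())
```

This also illustrates the backup-strategy angle: a compressed archive is a single portable file that restores back to a usable index directory.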
closed
2021-12-14T00:33:20Z
2021-12-18T14:46:59Z
https://github.com/neuml/txtai/issues/172
[]
davidmezzetti
0
huggingface/datasets
nlp
7,147
IterableDataset strange deadlock
### Describe the bug

```
import datasets
import torch.utils.data

num_shards = 1024

def gen(shards):
    for shard in shards:
        if shard < 25:
            yield {"shard": shard}

def main():
    dataset = datasets.IterableDataset.from_generator(
        gen,
        gen_kwargs={"shards": list(range(num_shards))},
    )
    dataset = dataset.shuffle(buffer_size=1)
    dataset = datasets.interleave_datasets(
        [dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
    )
    dataset = dataset.shuffle(buffer_size=1)

    dataloader = torch.utils.data.DataLoader(
        dataset,
        batch_size=8,
        num_workers=8,
    )
    for i, batch in enumerate(dataloader):
        print(batch)
        if i >= 10:
            break
    print()

if __name__ == "__main__":
    for _ in range(100):
        main()
```

### Steps to reproduce the bug

Running the script above, at some point it will freeze.

- Changing `num_shards` from 1024 to 25 avoids the issue
- Commenting out the final shuffle avoids the issue
- Commenting out the interleave_datasets call avoids the issue

As an aside, if you comment out just the final shuffle, the output from interleave_datasets is not shuffled at all even though there's the shuffle before it. So something about that shuffle config is not being propagated to interleave_datasets.

### Expected behavior

The script should not freeze.

### Environment info

- `datasets` version: 3.0.0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.12.5
- `huggingface_hub` version: 0.24.7
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1

I observed this with 2.21.0 initially, then tried upgrading to 3.0.0 and could still repro.
closed
2024-09-12T18:59:33Z
2024-09-23T09:32:27Z
https://github.com/huggingface/datasets/issues/7147
[]
jonathanasdf
6
thtrieu/darkflow
tensorflow
618
How to show one label only even though it detects all the objects
Hi Thtrieu, how can I show only one label? For example, I am using the tiny-yolo model, which detects 20 object classes. Instead of showing bounding boxes (labels) for all object classes, is it possible to show only one class? Say a picture contains persons, a dog, a bicycle and cars: I want it to draw a bounding box only for the person, even though it detects all the other objects.
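A hedged post-filtering sketch: darkflow's `return_predict()` gives a list of dicts with a `'label'` key (treat the exact dict shape as an assumption of this sketch), so you can keep detections for one class and draw boxes only for those:

```python
# Post-filter sketch: keep only one class from the full detection list.
# The dict shape mirrors darkflow's return_predict() output
# (label / confidence / topleft / bottomright).

def keep_label(detections, wanted="person"):
    return [d for d in detections if d["label"] == wanted]

# Hypothetical usage with darkflow (not run here):
#
# results = tfnet.return_predict(img)
# for det in keep_label(results, "person"):
#     ...  # draw det["topleft"] / det["bottomright"] on the frame
```

Detection still runs on all classes; only the drawing step is restricted.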
closed
2018-03-08T03:42:37Z
2018-09-07T05:07:02Z
https://github.com/thtrieu/darkflow/issues/618
[]
rezaabdullah
7
geex-arts/django-jet
django
240
custom admin view
Hi, is it possible to add a custom admin view not related to any model? I've read in one of the previous topics that this feature is available in the dev branch and will be released soon. Thanks
open
2017-07-27T16:59:33Z
2018-04-26T17:27:40Z
https://github.com/geex-arts/django-jet/issues/240
[]
kinastowski
3
httpie/http-prompt
api
113
Swagger integration improvements
@eliangcs, thank you for your good job with the initial integration of `swagger` specs. I had time to put the `swagger` feature into practice, and I think several improvements could be made that would increase overall usability, as swagger specifications contain much more information that could be displayed to a user. The following are general ideas for how we could improve swagger integration.

* Make use of the `host` & `basePath` swagger root properties. Basically call `http://localhost:8000> cd $host + $basePath` by default.

* Autocompletion of the `ls` & `cd` cmds is quite handy as it is. However, the "Endpoint" static value is currently shown everywhere, no matter whether the current path is an actual endpoint or just part of one or more endpoints. Furthermore, it would be useful to show a trimmed `summary` string value in the autocompletion popup when we want to initiate the request. Consider the following example:

```bash
http://localhost:8000/users/fogine/apps> post
```

When I type the request method (post), I'd get "Creates new app" instead of the current "POST request" autocompletion value. Also, we could show autocompletion of url segments in case of inline endpoint definition. Example:

```bash
http://localhost:8000/users/> post fogine/apps
```

While typing the "fogine/apps" path segment, it'd show autocompletion of possible url segments, and for the last url segment (apps) I could get the completion value "Creates new app". I think this would really improve user experience, although I haven't considered possible implementation issues so far.

* The `ls` command could give so much more information to a user.

Consider the following list of endpoints that a web service implements:

* PUT `/users/{username}`
* POST `/users/{username}`
* PUT `/users/{username}/avatar`

Now in http-prompt I'd just show the data:

```bash
http://localhost:8000> ls
POST  /users/{username}         Creates a user
PUT   /users/{username}         Updates a user
PUT   /users/{username}/avatar  Upload avatar
http://localhost:8000> ls users
POST  {username}         Creates a user
PUT   {username}         Updates a user
PUT   {username}/avatar  Upload avatar
```

Here the sample endpoints are rather simple, but for more complicated routes, the `summary` and the possible request methods of a given endpoint provide very useful info. There is more information to show, like endpoint tags and operationId. The table heading and columns could be configurable...

The `ls` command could also work with the `operationId` swagger option, as an endpoint which implements more than one request method can't uniquely identify itself. When a user would `ls` an actual endpoint, e.g.:

```bash
http://localhost:8000> ls urlOperationIdOrUniqueUrl
```

we could display detailed endpoint information, like the route `description`, response http codes, and for request parameters: the parameter type, whether a parameter is required, etc...

To conclude where I'm heading with this: the idea is to be able to use `http-prompt` with swagger specs without needing to look up information elsewhere, as is currently required for more complicated web services.

Note: I haven't thought much about the actual data output formats, so that should be thought out.

What do you think, @eliangcs or anybody else?
open
2017-04-08T15:15:43Z
2019-09-17T19:55:03Z
https://github.com/httpie/http-prompt/issues/113
[ "enhancement" ]
fogine
7
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
1,142
Testing pix2pix model getting runtimeError: Sizes of tensors must match...
I trained a pix2pix model using these settings: `--model pix2pix --gpu_ids -1 --input_nc 1 --output_nc 1 --load_size 768 --crop_size 768 --preprocess crop`. Now I want to test the model with `--preprocess none`, so that instead of cropping, the full image is run through the model. The flags I ran for testing are: `--model pix2pix --gpu_ids -1 --input_nc 1 --output_nc 1 --preprocess none`. But I'm getting this error:

```
The image size needs to be a multiple of 4. The loaded image size was (894, 1300), so it was adjusted to (896, 1300). This adjustment will be done to all images whose sizes are not multiples of 4
Traceback (most recent call last):
  File "test.py", line 63, in <module>
    model.test()  # run inference
  File "/Users/kaungkhant/Desktop/pytorch-CycleGAN-and-pix2pix/models/base_model.py", line 105, in test
    self.forward()
  File "/Users/kaungkhant/Desktop/pytorch-CycleGAN-and-pix2pix/models/pix2pix_model.py", line 88, in forward
    self.fake_B = self.netG(self.real_A)  # G(A)
  File "/Users/kaungkhant/.pyenv/versions/3.8.5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/Users/kaungkhant/Desktop/pytorch-CycleGAN-and-pix2pix/models/networks.py", line 465, in forward
    return self.model(input)
  File "/Users/kaungkhant/.pyenv/versions/3.8.5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/Users/kaungkhant/Desktop/pytorch-CycleGAN-and-pix2pix/models/networks.py", line 533, in forward
    return self.model(x)
  File "/Users/kaungkhant/.pyenv/versions/3.8.5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/Users/kaungkhant/.pyenv/versions/3.8.5/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/Users/kaungkhant/.pyenv/versions/3.8.5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/Users/kaungkhant/Desktop/pytorch-CycleGAN-and-pix2pix/models/networks.py", line 536, in forward
    print(self.model(x).size())
  [... the module.py:722 / container.py:117 / networks.py:536 frames above repeat once per nested U-Net block ...]
  File "/Users/kaungkhant/Desktop/pytorch-CycleGAN-and-pix2pix/models/networks.py", line 538, in forward
    return torch.cat([x, self.model(x)], 1)
RuntimeError: Sizes of tensors must match except in dimension 1. Got 7 and 6 in dimension 3 (The offending index is 1)
```

The testing works when I crop the test images to a multiple of 256, but I want to be able to skip preprocessing of the inputs entirely. Could you help me solve this issue?
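For what it's worth, a hedged sketch of the arithmetic behind the failure: `--preprocess none` only rounds sizes to a multiple of 4, but the `unet_256` generator halves the spatial dimensions 8 times, so each side must be divisible by 2**8 = 256 (which matches "works when I crop to a multiple of 256"). A helper for computing the padded target size; the actual padding (e.g. reflection-pad and then crop the output back) is left to the caller:

```python
# Compute the nearest size an 8-level UNet can consume: round each side up
# to the next multiple of 256 so every downsampling step divides evenly.
import math

def pad_to_multiple(width: int, height: int, multiple: int = 256):
    w = math.ceil(width / multiple) * multiple
    h = math.ceil(height / multiple) * multiple
    return w, h
```

For the (894, 1300) image in the traceback this gives (1024, 1536), i.e. pad rather than crop, so no image content is lost.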
open
2020-09-05T21:43:13Z
2020-10-28T11:44:39Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1142
[]
justkhant
2
quasarstream/python-ffmpeg-video-streaming
dash
56
metadata not preserving
@aminyazdanpanah The metadata is not preserved in the output the same as in the input. Is there an option to pass `-map_metadata 0` so the global metadata is copied to the outputs?
closed
2021-05-09T14:04:42Z
2021-12-03T11:48:59Z
https://github.com/quasarstream/python-ffmpeg-video-streaming/issues/56
[]
vijaykumarchettiar
1
python-gitlab/python-gitlab
api
2,218
URL-encoded kwargs are infinitely duplicated in query params (v3.7.0 regression for arrays)
## Description of the problem, including code/CLI snippet With #1699 we added array params for query params in GET requests. But GitLab also returns that param, URL-encoded, in the `next` URL which we then follow in listing endpoints. Because one is URL-encoded and the other isn't, the param gets added at the end of the query again. With `list()` calls this happens over and over again and causes 502 due to super long query strings eventually. minimal reproduction: ```python import gitlab gl = gitlab.Gitlab() gl.enable_debug() projects = gl.projects.list(topics=["python"], per_page=1, iterator=True) for project in projects: # watch the `send` URL grow continuously until triggering 502 pass ``` ## Expected Behavior If a kwarg/param (URL-encoded or not) that we provide is already in the provided URL, it should never be duplicated when we construct the final request URL. ## Actual Behavior `http_request()` and requests can't tell if our kwargs match those already in the URL params and they get duplicated to oblivion, causing 502 once the URL query is too long for the server. ## Specifications - python-gitlab version: v3.7.0 - API version you are using (v3/v4): v4 - Gitlab server version (or gitlab.com): all
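A minimal sketch of the deduplication invariant the fix needs (an illustrative helper, not python-gitlab's actual code): parse the query already present in the `next` URL with percent-decoding, then drop any kwarg whose key is already there, so `topics[]=python` is never re-appended on each page.

```python
# Before appending kwargs to a request URL, drop any kwarg already present
# in the URL's (decoded) query string. parse_qs percent-decodes keys, so
# topics%5B%5D in the URL matches the plain "topics[]" kwarg key.
from urllib.parse import parse_qs, urlsplit

def params_not_in_url(url: str, params: dict) -> dict:
    existing = parse_qs(urlsplit(url).query)
    return {k: v for k, v in params.items() if k not in existing}
```

With this invariant, following a `next` link that already carries the array param leaves the query string stable instead of growing without bound.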
closed
2022-08-04T13:31:26Z
2023-08-07T01:23:13Z
https://github.com/python-gitlab/python-gitlab/issues/2218
[ "bug" ]
nejch
1
gunthercox/ChatterBot
machine-learning
1,618
How to disable nltk_data command line output?
Hi. I'm developing a chatbot with this library. I was wondering if there's any way to suppress the output to the command line related to **nltk_data** every time I run my program. I am talking about this:

```
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data]     /home/sgeor/nltk_data...
[nltk_data]   Package averaged_perceptron_tagger is already up-to-date!
[nltk_data] Downloading package punkt to /home/sgeor/nltk_data...
[nltk_data]   Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /home/sgeor/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
```

I am using flask for the frontend of my app, and every time the server is restarted (or I run any of the `flask do_something` commands I have implemented) this appears in the console.

Thanks.
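Two hedged pointers: if the messages come from an `nltk.download(...)` call you control, that function accepts `quiet=True`; when the call happens inside a dependency, a generic stdlib workaround is to redirect stdout around the noisy call (add `redirect_stderr` similarly if the lines go to stderr):

```python
# Generic workaround: silence anything a callable prints to stdout.
import contextlib
import io

def silently(fn, *args, **kwargs):
    """Call fn with stdout redirected to an in-memory buffer."""
    with contextlib.redirect_stdout(io.StringIO()):
        return fn(*args, **kwargs)

# Hypothetical usage around the noisy import/initialization:
#
# bot = silently(ChatBot, "MyBot")
```

The return value passes through unchanged, so only the console noise is swallowed.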
closed
2019-02-12T18:55:18Z
2025-02-18T22:50:15Z
https://github.com/gunthercox/ChatterBot/issues/1618
[ "feature" ]
Ligh7bringer
12
FlareSolverr/FlareSolverr
api
569
Stealth plugin?
> When some request arrives, it uses puppeteer with the **stealth plugin**

Am I missing something, or despite what the readme suggests this doesn't actually use the puppeteer stealth addon? If it did, it'd be

```
// puppeteer-extra is a drop-in replacement for puppeteer,
// it augments the installed puppeteer with plugin functionality
const puppeteer = require('puppeteer-extra')

// add stealth plugin and use defaults (all evasion techniques)
const StealthPlugin = require('puppeteer-extra-plugin-stealth')
puppeteer.use(StealthPlugin())
```

instead of what we have: `import {Page, HTTPResponse} from 'puppeteer'`

Right? I only mention it because puppeteer-stealth **claims** it can currently bypass all captchas etc., which I'm assuming includes the new cloudflare 'check if secure' page.

Well, just food for thought. Other than a few sites I can't scrape anymore, FlareSolverr is still working fine for me.
closed
2022-10-27T23:53:52Z
2023-01-05T02:22:34Z
https://github.com/FlareSolverr/FlareSolverr/issues/569
[]
Deathnetworks
1
deepspeedai/DeepSpeed
deep-learning
6,889
Using zero3 on multiple nodes is slow
I have multiple nodes, each with 8 40GB A100s, and I want to train a 72B model. When using ZeRO-3, the 72B model's parameters are distributed across all GPUs of all nodes. Even with NVLink, the communication latency is still very high, resulting in training that is much slower than using ZeRO-3 + offloading on a single node. The problem is that the more nodes there are, the slower training gets; it ends up better to use only a single node. Is there a way to make ZeRO-3 partition model parameters only within a node, so that each node stores a complete copy of the model and only gradients are synchronized between nodes to speed up training?
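If the goal is "keep the parameter shards needed for gathers inside one node", that is roughly what ZeRO++ hierarchical partitioning (hpZ) is for: a secondary copy of the ZeRO-3 shards is kept within each node, so forward/backward all-gathers stay on NVLink instead of the inter-node fabric. A hedged config sketch; the key names below follow the ZeRO++ tutorial as I recall it, so verify them against the DeepSpeed documentation for your version before relying on this:

```python
# Hedged sketch of the relevant zero_optimization section (as a Python dict
# rather than the usual JSON file). zero_hpz_partition_size = GPUs per node
# keeps the secondary parameter shards node-local; the two quantized-*
# options are the optional ZeRO++ communication-compression features.
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "zero_hpz_partition_size": 8,
        "zero_quantized_weights": True,
        "zero_quantized_gradients": True,
    },
}
```

Note this reduces communication at the cost of extra per-node memory for the secondary shards, which may be tight on 40GB cards with a 72B model.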
open
2024-12-18T03:43:41Z
2025-01-22T14:25:07Z
https://github.com/deepspeedai/DeepSpeed/issues/6889
[ "bug", "training" ]
HelloWorld506
8
piskvorky/gensim
machine-learning
2,586
gensim.summarization.keywords fetching different results
#### Problem description

Hi, I am having a weird issue where, when I pass the exact same text to the following function -

```python
gensim.summarization.keywords(text1, ratio=0.9, pos_filter=('NP')).split("\n")
```

I get two different result sets for the exact same parameters when I run it multiple times. The output should be the same for a particular text. How is it possible that it's excluding/including a few phrase extracts across iterations? Below it shows the difference - ['data'] vs ['static data'] - and ['dynamic'] was not fetched in the second run at all. Attached a screenshot for reference. Any guidance will be appreciated.

![gensim_summarization_diffresults](https://user-images.githubusercontent.com/20959591/63968956-cdd4cf00-ca6e-11e9-8a36-ca5e152d13c2.png)

#### Steps/code/corpus to reproduce

```python
import gensim

text1 = 'The method according to claim3, wherein the step of collecting further comprises: receiving the static data in the management data through a notification about change of the at least one cloud server being reported by a protocol agent which is configured to collect the management data from the at least one cloud server; and requesting and receiving the dynamic data in the management data from the protocol agent.'
phrase_token = gensim.summarization.keywords(text1, ratio=0.9, pos_filter=('NP')).split("\n")
phrase_token
```

#### Versions

Darwin-18.7.0-x86_64-i386-64bit
Python 3.7.3 (default, Mar 27 2019, 16:54:48) [Clang 4.0.1 (tags/RELEASE_401/final)]
NumPy 1.16.4
SciPy 1.2.1
gensim 3.7.3
FAST_VERSION 1
closed
2019-08-29T19:16:52Z
2019-09-28T13:44:53Z
https://github.com/piskvorky/gensim/issues/2586
[]
JayeetaP
7
SciTools/cartopy
matplotlib
2,055
cartopy doesn't seem to install, even with workaround
### Description

my process:

```
C:\Windows\System32>pip install --upgrade --no-cache-dir --use-deprecated=legacy-resolver --user cartopy
Collecting cartopy
  Downloading Cartopy-0.20.2.tar.gz (10.8 MB)
     ---------------------------------------- 10.8/10.8 MB 6.4 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [3 lines of output]
      setup.py:117: UserWarning: Unable to determine GEOS version. Ensure you have 3.7.2 or later installed, or installation may fail.
        warnings.warn(
      Proj 8.0.0 must be installed.
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [3 lines of output]
    setup.py:117: UserWarning: Unable to determine GEOS version. Ensure you have 3.7.2 or later installed, or installation may fail.
      warnings.warn(
    Proj 8.0.0 must be installed.
    [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
```

#### Code to reproduce

```
pip install --upgrade --no-cache-dir --use-deprecated=legacy-resolver --user cartopy
```

<details>
<summary>Full environment definition</summary>

### Operating system

Windows 11

### pip list

```
pip install --upgrade --no-cache-dir --use-deprecated=legacy-resolver --user cartopy
```

</details>
closed
2022-06-26T23:02:44Z
2022-06-29T13:09:11Z
https://github.com/SciTools/cartopy/issues/2055
[ "Component: installation" ]
STomGu
4
serengil/deepface
deep-learning
1,161
Why does VGG-Face end with convolutional layers instead of Dense/Fully-Connected layers?
:question:
closed
2024-04-01T23:28:14Z
2024-04-02T10:28:22Z
https://github.com/serengil/deepface/issues/1161
[ "question" ]
FaaizMemonPurdue
4
dynaconf/dynaconf
django
694
Validate or set default value in nested settings list
Hi, first of all, this is really a useful tool and I'd like my project to use it in production. We have a hardware setup that is described as infrastructure as code:

```
[VM-009]
ID = "VM-009"
IP = "192.168.56.109"

[VM-009.VM_CONFIG]
CPU = 1
RAM = "256"

[VM-009.COMPONENTS.DUTS.DUT-009]
ID = "DUT-009"
DESCRIPTION = ""

[VM-009.COMPONENTS.DUTS.DUT-010]
ID = "DUT-010"

[VM-009.COMPONENTS.CONNECT.MH-GCS]
ID = "MH-GCS"
IP = "127.0.0.1"

[VM-009.COMPONENTS.CONNECT.MH-DUT-009]
ID = "MH-DUT-009"

[VM-009.COMPONENTS.CONNECT.MH-DUT-010]
ID = "MH-DUT-010"
```

Now I'd like to dynamically add/validate default values for the nested objects in `VM-009.COMPONENTS.DUTS` and `VM-009.COMPONENTS.CONNECT`. Is there already a way to do this with the Validator class? Something like:

`Validator("COMPONENTS.DUTS[*].TEST_CONFIG", default="a default value for TEST_CONFIG in all the nested objects in COMPONENTS.DUTS"),`

Neither prototyping nor the documentation could give me a good answer. Is there another way or best practice to solve this? My next step would be to attack the problem programmatically, but I would really like to avoid that.

Side question: is there a native way to trigger the validation defined in a separate .toml file from Python, or do you need to use the CLI? I'd love to see more support for that workflow, for example conditional validation (and of course the issue with the nested objects explained above).

Best, Mischa
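For the programmatic route, the loop is at least small. This sketch works on plain dicts standing in for the loaded settings; with dynaconf you could apply the same walk over the settings object and write defaults back (treat the exact dynaconf calls for writing values back as an assumption to verify):

```python
# Walk every entry under COMPONENTS.<section> of one VM definition and
# fill a default for a missing key. Section/key names mirror the TOML in
# the question; the wildcard Validator syntax shown there is hypothetical.

def apply_nested_default(vm: dict, section: str, key: str, default):
    for entry in vm.get("COMPONENTS", {}).get(section, {}).values():
        entry.setdefault(key, default)
    return vm
```

Existing values are left untouched (`setdefault`), so this behaves like a Validator default rather than an override.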
open
2021-11-29T19:14:15Z
2022-09-21T17:12:10Z
https://github.com/dynaconf/dynaconf/issues/694
[ "question" ]
MischaZihler
0
jmcnamara/XlsxWriter
pandas
1,081
Should this qualify as a bug report?
### Question

I had my first experience (so take all this with a huge grain of noob salt) adding formulas to a created Excel doc, and I was a bit surprised how much debugging was involved for formulas (initially developed and working using Excel in Office 365 (Version 16.86 (24060916)) on a Mac). I encountered a number of difficulties. Some may be actual issues that could be fixed in `xlsxwriter`, and some could just be fixed by me learning how to use your excellent package better - sorry if this is all the latter case - I just know that I myself value first-impression feedback, so I'm providing this in case you do too. I had expected my work on this yesterday to take perhaps an hour or 2, but it ended up taking all day. I did eventually arrive at a well-functioning solution, but only after using your debug documentation that employs unzipping and xmllint.

Difficulties I had to tackle:

- Errors from excel (resulting in it "repairing" the file by removing various formulas)
- `#NAME?` errors
- Inconsistent application of single-value `@` characters in front of functions (which appeared to be related to performance issues).
- Dramatic (excel) performance issues in both attempting to open a file (either quietly failing to open or only opening after clicking any excel menu) and gradual random appearances of `#CALC?` errors after opening. (There was also a big performance difference using `AND()` versus `OR(NOT()...)`, but I think that may go away after applying function prefixes?)
- Excel errors about values being too long (on empty sheets, i.e. nothing to calculate with the formula)

I suspect it all boils down to the need to apply function prefixes (like `_xlfn.` and others), because after manually adding them in the places where a function manually pasted into Excel differed from the one that `xlsxwriter` generated, it seemed to fix most/all of the issues (once I found the documentation that explained how to do this).
The following are all 4 of the formulas I eventually settled on after making fixes and adding niceties (like using `IF(OR(ISBLANK...` to handle empty rows and stuff (that unfortunately made them hard to read, so I'm supplying commented versions too). I initially started out with simpler versions, that I then modified once I got things working smoothly, but adding these as-is (aside from adding the documented lambda variable prefix) leads to a number of the problems described above and requires debugging to add missing prefixes. My hope is that you find these useful (maybe not?): (Note: the "commented versions" of the formulas below use f-strings which end up being used with `.format(a_dict)` to convert the variables into their column letters. And I used `ROW()` instead of literal row numbers to make it easier to use the same exact formula on every row): - The "Infusate Name" column (in a sheet named "Infusates"): - `=IF(OR(NOT(ISBLANK(INDIRECT("A" & ROW()))),NOT(ISBLANK(INDIRECT("B" & ROW()))),NOT(ISBLANK(INDIRECT("C" & ROW()))),NOT(ISBLANK(INDIRECT("D" & ROW()))),),CONCATENATE(IF(ISBLANK(INDIRECT("B" & ROW())),"",CONCATENATE(INDIRECT("B" & ROW())," ")),IF(ROWS(FILTER(A:A,A:A=INDIRECT("A" & ROW()),""))>1,"{",""),TEXTJOIN(";",TRUE,SORT(MAP(FILTER(C:C,A:A=INDIRECT("A" & ROW()),""),FILTER(D:D,A:A=INDIRECT("A" & ROW()), ""),LAMBDA(a,b,CONCATENATE(a,"[",b,"]"))))),IF(ROWS(FILTER(A:A,A:A=INDIRECT("A" & ROW()),""))>1,"}","")),"")` - A commented version: ``` # If there's any data on the row "=IF(" # "OR" makes excel significantly more responsive "OR(" f'NOT(ISBLANK(INDIRECT("{{{ID_KEY}}}" & ROW()))),' f'NOT(ISBLANK(INDIRECT("{{{GROUP_NAME_KEY}}}" & ROW()))),' f'NOT(ISBLANK(INDIRECT("{{{TRACER_NAME_KEY}}}" & ROW()))),' f'NOT(ISBLANK(INDIRECT("{{{CONC_KEY}}}" & ROW()))),' ")," # Build the infusate name "CONCATENATE(" # If there is a tracer group name f'IF(ISBLANK(INDIRECT("{{{GROUP_NAME_KEY}}}" & ROW())),"",' # Insert the tracer group name 
f'CONCATENATE(INDIRECT("{{{GROUP_NAME_KEY}}}" & ROW())," ")),' # If there is more than 1 tracer with this row group ID, include an opening curly brace f"IF(ROWS(_xlfn._xlws.FILTER({{{ID_KEY}}}:{{{ID_KEY}}},{{{ID_KEY}}}:{{{ID_KEY}}}=" f'INDIRECT("{{{ID_KEY}}}" & ROW()),""))>1,"{{{{",""),' # Join the sorted tracers (from multiple rows) using a ';' delimiter, (mapping the names and concs) '_xlfn.TEXTJOIN(";",TRUE,_xlfn._xlws.SORT(_xlfn.MAP(' # Filter all tracer names to get ones whose tracer row group ID is the same as as this row's group ID f"_xlfn._xlws.FILTER({{{TRACER_NAME_KEY}}}:{{{TRACER_NAME_KEY}}}," f'{{{ID_KEY}}}:{{{ID_KEY}}}=INDIRECT("{{{ID_KEY}}}" & ROW()),""),' # Filter all concentrations to get ones whose tracer row group ID is the same as as this row's group ID f"_xlfn._xlws.FILTER({{{CONC_KEY}}}:{{{CONC_KEY}}}," f'{{{ID_KEY}}}:{{{ID_KEY}}}=INDIRECT("{{{ID_KEY}}}" & ROW()), ""),' # Apply this lambda to the tracer names and concentrations filtered above to concatenate the tracers and # their concentrations '_xlfn.LAMBDA(_xlpm.a,_xlpm.b,CONCATENATE(_xlpm.a,"[",_xlpm.b,"]"))))),' # If there is more than 1 tracer for this row group ID, include a closing curly brace f"IF(ROWS(_xlfn._xlws.FILTER({{{ID_KEY}}}:{{{ID_KEY}}},{{{ID_KEY}}}:{{{ID_KEY}}}=" f'INDIRECT("{{{ID_KEY}}}" & ROW()),""))>1,"}}}}","")),' '"")' ``` - Example: - ![infeg](https://github.com/jmcnamara/XlsxWriter/assets/2300532/dff76fd3-20bf-4123-935a-82bd5f4bfe42) - The "Tracer Name" column in a sheet named "Tracers": - As it appears in excel: - `=IF(AND(ISBLANK(INDIRECT("B" & ROW())),ISBLANK(INDIRECT("C" & ROW())),ISBLANK(INDIRECT("D" & ROW())),ISBLANK(INDIRECT("E" & ROW())),ISBLANK(INDIRECT("F" & ROW())),ISBLANK(INDIRECT("A" & ROW()))),"",CONCATENATE(INDIRECT("B" & ROW()),"-[",TEXTJOIN(",",TRUE,SORT(MAP(FILTER(C:C,A:A=INDIRECT("A" & ROW()), ""),FILTER(D:D,A:A=INDIRECT("A" & ROW()), ""),FILTER(E:E,A:A=INDIRECT("A" & ROW()), ""),FILTER(F:F,A:A=INDIRECT("A" & ROW()), ""),LAMBDA(mass,elem,cnt,poss, 
CONCATENATE(IF(ISBLANK(poss),"",poss), IF(ISBLANK(poss),"","-"), mass, elem, cnt))))),"]"))` - A commented version: ``` # If all columns are empty, return an empty string "=IF(" "AND(" f'ISBLANK(INDIRECT("{{{COMPOUND_KEY}}}" & ROW())),' f'ISBLANK(INDIRECT("{{{MASSNUMBER_KEY}}}" & ROW())),' f'ISBLANK(INDIRECT("{{{ELEMENT_KEY}}}" & ROW())),' f'ISBLANK(INDIRECT("{{{LABELCOUNT_KEY}}}" & ROW())),' f'ISBLANK(INDIRECT("{{{LABELPOSITIONS_KEY}}}" & ROW())),' f'ISBLANK(INDIRECT("{{{ID_KEY}}}" & ROW()))' '),"",' # Otherwise, build the tracer name "CONCATENATE(" # Start with the compound f'INDIRECT("{{{COMPOUND_KEY}}}" & ROW()),' # Wrap the labels in "-[]" '"-[",' # Join all the label strings with "," '_xlfn.TEXTJOIN(",",TRUE,_xlfn._xlws.SORT(_xlfn.MAP(' # Include mass number from every row for this group f"_xlfn._xlws.FILTER({{{MASSNUMBER_KEY}}}:{{{MASSNUMBER_KEY}}}," f'{{{ID_KEY}}}:{{{ID_KEY}}}=INDIRECT("{{{ID_KEY}}}" & ROW()), ""),' # Include element from every row for this group f"_xlfn._xlws.FILTER({{{ELEMENT_KEY}}}:{{{ELEMENT_KEY}}}," f'{{{ID_KEY}}}:{{{ID_KEY}}}=INDIRECT("{{{ID_KEY}}}" & ROW()), ""),' # Include count from every row for this group f"_xlfn._xlws.FILTER({{{LABELCOUNT_KEY}}}:{{{LABELCOUNT_KEY}}}," f'{{{ID_KEY}}}:{{{ID_KEY}}}=INDIRECT("{{{ID_KEY}}}" & ROW()), ""),' # Include positions from every row for this group f"_xlfn._xlws.FILTER({{{LABELPOSITIONS_KEY}}}:{{{LABELPOSITIONS_KEY}}}," f'{{{ID_KEY}}}:{{{ID_KEY}}}=INDIRECT("{{{ID_KEY}}}" & ROW()), ""),' # Build each label string using a lambda "_xlfn._xlws.LAMBDA(_xlpm.mass, _xlpm.elem, _xlpm.cnt, _xlpm.poss, " # Concatenate the label elements "CONCATENATE(" # If the positions string is empty, return an empty string, otherwise, return the positions string 'IF(ISBLANK(_xlpm.poss),"",_xlpm.poss), IF(ISBLANK(_xlpm.poss),"","-"), ' # And just straight-up join the other columns as-is "_xlpm.mass, _xlpm.elem, _xlpm.cnt))" ")))," # Close off the square bracket in the encompassing "-[]" '"]"' "))" ``` - Example: - 
![trceg](https://github.com/jmcnamara/XlsxWriter/assets/2300532/2b8d219a-19b7-4c86-8cd3-819a796f2b7c) - The "Sequence Name" column (in a sheet named "Sequences"): - As it appears in excel: - `=IF(OR(NOT(ISBLANK(@INDIRECT("B" & ROW()))),NOT(ISBLANK(@INDIRECT("C" & ROW()))),NOT(ISBLANK(@INDIRECT("D" & ROW()))),NOT(ISBLANK(@INDIRECT("E" & ROW()))),),TEXTJOIN(", ", FALSE, INDIRECT("B" & ROW()), INDIRECT("C" & ROW()), INDIRECT("D" & ROW()), IF(ISBLANK(@INDIRECT("E" & ROW())),"",TEXT(@INDIRECT("E" & ROW()),"yyyy-mm-dd"))),"")` - A commented version: ``` "=IF(" "OR(" f'NOT(ISBLANK(INDIRECT("{{{OPERATOR_KEY}}}" & ROW()))),' f'NOT(ISBLANK(INDIRECT("{{{LCNAME_KEY}}}" & ROW()))),' f'NOT(ISBLANK(INDIRECT("{{{INSTRUMENT_KEY}}}" & ROW()))),' f'NOT(ISBLANK(INDIRECT("{{{DATE_KEY}}}" & ROW()))),' ")," f'_xlfn.TEXTJOIN("{MSRunSequence.SEQNAME_DELIMITER} ", FALSE, ' f'INDIRECT("{{{OPERATOR_KEY}}}" & ROW()), ' f'INDIRECT("{{{LCNAME_KEY}}}" & ROW()), ' f'INDIRECT("{{{INSTRUMENT_KEY}}}" & ROW()), ' # If the date is blank, return empty string f'IF(ISBLANK(INDIRECT("{{{DATE_KEY}}}" & ROW())),"",' # Otherwise, format the date (because excel returns an encoded number) f'TEXT(INDIRECT("{{{DATE_KEY}}}" & ROW()),"yyyy-mm-dd"))' '),"")' ``` - Example: - ![seqeg](https://github.com/jmcnamara/XlsxWriter/assets/2300532/3ab1ce9b-9aaf-477c-8bb8-eab86b85f635) - The "Name" column (in a sheet named "LC Protocols"): - This one worked out of the box and I didn't have as much trouble with it, but I was surprised that the `@` symbol appeared in front of the `INDIRECT` methods (and not ever in any of the other formulas' usages of `INDIRECT`) in excel: - `=IF(OR(ISBLANK(@INDIRECT("B" & ROW())),ISBLANK(@INDIRECT("C" & ROW()))),"",CONCATENATE(@INDIRECT("B" & ROW()),"-",@INDIRECT("C" & ROW()),"-min"))` - A commented version: ``` # If either the type of runlen is blank, return an empty string "=IF(" "OR(" f'ISBLANK(INDIRECT("{{{TYPE_KEY}}}" & ROW())),' f'ISBLANK(INDIRECT("{{{RUNLEN_KEY}}}" & ROW()))' '),"",' # 
Otherwise, construct the protocol name, using concatenation f"CONCATENATE(" f'INDIRECT("{{{TYPE_KEY}}}" & ROW()),' '"-",' f'INDIRECT("{{{RUNLEN_KEY}}}" & ROW()),' '"-min"' "))" ``` - Example: - ![lceg](https://github.com/jmcnamara/XlsxWriter/assets/2300532/55215d9d-6983-48a8-b71e-29cd0200bca3)
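Since most of the debugging above came down to missing function prefixes, a small helper along these lines could mechanize the rewriting before handing formulas to `write_formula()`. The function tables here are an illustrative subset only (check the XlsxWriter "Working with Formulas" docs for the full list of prefixed functions), and `_xlpm.` lambda-parameter prefixes are not handled:

```python
import re

# Illustrative subset of the functions that need XlsxWriter's internal
# prefixes: "_xlfn." for the newer functions, "_xlfn._xlws." for the
# dynamic-array spill functions.  Not the library's full list.
XLFN = {"TEXTJOIN", "MAP", "LAMBDA", "ISFORMULA"}
XLWS = {"FILTER", "SORT"}

def prefix_formula(formula: str) -> str:
    """Rewrite FUNC( calls to their prefixed internal names."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name in XLWS:
            return f"_xlfn._xlws.{name}("
        if name in XLFN:
            return f"_xlfn.{name}("
        return match.group(0)
    # (?<![\w.]) keeps already-prefixed names from being prefixed twice
    return re.sub(r"(?<![\w.])([A-Z][A-Z0-9]*)\(", repl, formula)

print(prefix_formula('=TEXTJOIN(";",TRUE,SORT(FILTER(C:C,A:A=1,"")))'))
```

Running the helper twice is a no-op, so it is safe to apply to formula strings that already carry some prefixes.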
closed
2024-07-11T15:01:18Z
2024-07-12T08:10:31Z
https://github.com/jmcnamara/XlsxWriter/issues/1081
[ "question" ]
hepcat72
2
huggingface/pytorch-image-models
pytorch
2,301
How can I download vit_huge_patch14_224_in21k.pth
I tried to use your ViT model to train on my dataset; my val acc is about 82%, but I want to use vit_huge_patch14_224_in21k.pth, or modify the patch size directly; I don't know if that's feasible. By the way, my dataset is about bridge damage.
closed
2024-10-14T02:20:29Z
2024-10-14T17:31:15Z
https://github.com/huggingface/pytorch-image-models/issues/2301
[ "enhancement" ]
twisti14
0
numba/numba
numpy
9,529
Wrong list result with parallel=True despite no resizing and no cross-thread access of the list
<!-- Thanks for opening an issue! To help the Numba team handle your information efficiently, please first ensure that there is no other issue present that already describes the issue you have (search at https://github.com/numba/numba/issues?&q=is%3Aissue). --> ## Reporting a bug <!-- Before submitting a bug report please ensure that you can check off these boxes: --> - [x] I have tried using the latest released version of Numba (most recent is visible in the release notes (https://numba.readthedocs.io/en/stable/release-notes-overview.html)). - [x] I have included a self-contained code sample to reproduce the problem, i.e. it's possible to run as 'python bug.py'. ```python from numba.typed import Dict, List from numba import prange, njit, types import numpy as np def weird(inputs): outputs = [(Dict.empty(key_type=types.int64, value_type=types.float64), np.zeros(1))] * len(inputs) for i in prange(len(inputs)): y = inputs[i] out = np.zeros(1) out[0] = i outputs[i] = (inputs[i], out) return outputs inputs = [Dict.empty(key_type=types.int64, value_type=types.float64) for i in range(9)] outputs = weird(inputs) parallel_outputs = njit(parallel=True)(weird)(inputs) print(outputs[0][1][0], parallel_outputs[0][1][0]) ``` prints ``` 0.0 1.0 ``` The output of the `weird` function is a `List[(Dict, array)]`, and each array is supposed to just hold the corresponding loop index. In the non-parallel execution the zeroth entry of the output list is `(somedict, [0.0])`, as it should be. In the parallel execution, the zeroth entry of the output list is incorrectly equal to the first entry of the output list, `(somedict, [1.0])`. This stops happening if - I reduce the input from length `9` to length `8` (on my laptop with 8 logical cores) - I remove the `Dict` component of the `outputs` tuples. - PS: I'd appreciate it if you know of a way to circumvent this in the meantime. In my real problem, I have quite the messy datatype in the list. 
Python version: '3.12.2 (main, Feb 25 2024, 16:36:57) [GCC 9.4.0]' numba version: 0.59.1 numpy version: 1.26.4 OS: NAME="Ubuntu" VERSION="20.04.6 LTS (Focal Fossa)" ``` processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 140 model name : 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz stepping : 1 microcode : 0xb4 cpu MHz : 2400.000 cache size : 8192 KB physical id : 0 siblings : 8 core id : 0 cpu cores : 4 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 27 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities vmx flags : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple pml ept_mode_based_exec tsc_scaling bugs : spectre_v1 spectre_v2 spec_store_bypass swapgs eibrs_pbrsb gds bogomips : 4838.40 clflush size : 64 cache_alignment : 64 address sizes : 39 bits physical, 48 bits virtual power management: (processors 1-7 are identical apart from core id, apicid, and cpu MHz) ```
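In the meantime, a common way to sidestep ParallelAccelerator issues with container writes is to keep the `prange` body free of list/tuple setitem entirely: preallocate a plain numeric buffer outside the loop, let each iteration write only scalars at index `i`, and assemble the `List[(Dict, array)]` structure afterwards in a serial loop. A numba-free sketch of that pattern (plain lists stand in for `numba.typed.List`/`np.zeros` so it runs anywhere; whether it dodges the miscompile for every datatype is untested):

```python
# Parallel-safe pattern sketch: the (would-be) prange loop stores only
# scalars into a preallocated buffer indexed by i; the tuple/list structure
# is built outside the parallel region.
def weird_workaround(inputs):
    n = len(inputs)
    buf = [0.0] * n              # stand-in for np.zeros(n)
    for i in range(n):           # would be numba.prange(n)
        buf[i] = float(i)        # scalar store only, no container of objects
    # serial assembly after the parallel region
    return [(inputs[i], [buf[i]]) for i in range(n)]

print(weird_workaround([{}] * 3))
```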
open
2024-04-14T02:02:45Z
2024-04-16T10:03:39Z
https://github.com/numba/numba/issues/9529
[ "ParallelAccelerator", "bug - miscompile" ]
soerenwolfers
3
dynaconf/dynaconf
flask
595
[bug] SQLAlchemy URL object replaced with BoxList object
Using dynaconf with Flask and Flask-SQLAlchemy. If I initialize dynaconf, then assign a sqlalchemy `URL` object to a config key, the object becomes a `BoxList`, which causes sqlalchemy to fail later. Dynaconf should not replace arbitrary objects. ```python app = Flask(__name__) dynaconf.init_app(app) app.config["SQLALCHEMY_DATABASE_URI"] = sa_url( "postgresql", None, None, None, None, "example" ) print(type(app.config["SQLALCHEMY_DATABASE_URI"])) ``` ``` <class 'dynaconf.vendor.box.box_list.BoxList'> ``` This is a problem when using SQLAlchemy 1.4, which treats the URL as an object with attributes instead of a tuple. cc @davidism
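The type loss can be reproduced without dynaconf or SQLAlchemy at all: SQLAlchemy 1.4's `URL` is a named-tuple-style object, so any config container that auto-converts tuple-like values into its own list type will silently strip the attributes. The `URL` and `BoxingConfig` classes in this sketch are hypothetical stand-ins for the real APIs:

```python
# Minimal reproduction of the failure mode, with no dynaconf/SQLAlchemy:
# a config mapping that "boxes" tuple-like values on assignment loses the
# original object's attributes.
from typing import NamedTuple

class URL(NamedTuple):                        # stand-in for sqlalchemy's URL
    drivername: str
    database: str

class BoxingConfig(dict):
    def __setitem__(self, key, value):
        if isinstance(value, (list, tuple)):  # dynaconf-style auto-boxing
            value = list(value)               # attribute access is lost here
        super().__setitem__(key, value)

cfg = BoxingConfig()
cfg["SQLALCHEMY_DATABASE_URI"] = URL("postgresql", "example")
print(type(cfg["SQLALCHEMY_DATABASE_URI"]))   # a list, not a URL
```

As a stopgap until the conversion is fixed, assigning `str(sa_url(...))` instead of the `URL` object should survive, since SQLAlchemy also accepts database URLs as strings.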
closed
2021-06-01T19:15:32Z
2021-08-19T14:14:32Z
https://github.com/dynaconf/dynaconf/issues/595
[ "bug", "HIGH", "backport3.1.5" ]
trickardy
2
Farama-Foundation/PettingZoo
api
1,254
Error running render_agilerl_maddpg.py
### Describe the bug /home/skr/miniconda3/envs/py38_2/bin/python /home/skr/PettingZoo/tutorials/AgileRL/render_agilerl_maddpg.py Traceback (most recent call last): File "/home/skr/PettingZoo/tutorials/AgileRL/render_agilerl_maddpg.py", line 118, in <module> cont_actions, discrete_action = maddpg.getAction( File "/home/skr/miniconda3/envs/py38_2/lib/python3.8/site-packages/agilerl/algorithms/maddpg.py", line 418, in getAction action_values = actor(state) File "/home/skr/miniconda3/envs/py38_2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/skr/miniconda3/envs/py38_2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) File "/home/skr/miniconda3/envs/py38_2/lib/python3.8/site-packages/agilerl/networks/evolvable_mlp.py", line 287, in forward x = self.feature_net(x) File "/home/skr/miniconda3/envs/py38_2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/skr/miniconda3/envs/py38_2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) File "/home/skr/miniconda3/envs/py38_2/lib/python3.8/site-packages/torch/nn/modules/container.py", line 219, in forward input = module(input) File "/home/skr/miniconda3/envs/py38_2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/skr/miniconda3/envs/py38_2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) File "/home/skr/miniconda3/envs/py38_2/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 117, in forward return F.linear(input, self.weight, self.bias) RuntimeError: mat1 and mat2 shapes cannot be multiplied (336x84 and 4x64) Process finished with exit code 
1 ### Code example ```shell import os import imageio import numpy as np import supersuit as ss import torch from agilerl.algorithms.maddpg import MADDPG from PIL import Image, ImageDraw from pettingzoo.atari import space_invaders_v2 # Define function to return image def _label_with_episode_number(frame, episode_num): im = Image.fromarray(frame) drawer = ImageDraw.Draw(im) if np.mean(frame) < 128: text_color = (255, 255, 255) else: text_color = (0, 0, 0) drawer.text( (im.size[0] / 20, im.size[1] / 18), f"Episode: {episode_num+1}", fill=text_color ) return im if __name__ == "__main__": device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Configure the environment env = space_invaders_v2.parallel_env(render_mode="rgb_array") channels_last = True # Needed for environments that use images as observations if channels_last: # Environment processing for image based observations env = ss.frame_skip_v0(env, 4) env = ss.clip_reward_v0(env, lower_bound=-1, upper_bound=1) env = ss.color_reduction_v0(env, mode="B") env = ss.resize_v1(env, x_size=84, y_size=84) env = ss.frame_stack_v1(env, 4) env.reset() try: state_dim = [env.observation_space(agent).n for agent in env.agents] one_hot = True except Exception: state_dim = [env.observation_space(agent).shape for agent in env.agents] one_hot = False try: action_dim = [env.action_space(agent).n for agent in env.agents] discrete_actions = True max_action = None min_action = None except Exception: action_dim = [env.action_space(agent).shape[0] for agent in env.agents] discrete_actions = False max_action = [env.action_space(agent).high for agent in env.agents] min_action = [env.action_space(agent).low for agent in env.agents] # Pre-process image dimensions for pytorch convolutional layers if channels_last: state_dim = [ (state_dim[2], state_dim[0], state_dim[1]) for state_dim in state_dim ] # Append number of agents and agent IDs to the initial hyperparameter dictionary n_agents = env.num_agents agent_ids = env.agents 
# Instantiate an MADDPG object
maddpg = MADDPG(
    state_dim,
    action_dim,
    one_hot,
    n_agents,
    agent_ids,
    max_action,
    min_action,
    discrete_actions,
    device=device,
)

# Load the saved algorithm into the MADDPG object
# path = "./models/MADDPG/MADDPG_trained_agent.pt"
# maddpg.loadCheckpoint(path)

# Define test loop parameters
episodes = 10  # Number of episodes to test agent on
max_steps = 500  # Max number of steps to take in the environment in each episode

rewards = []  # List to collect total episodic reward
frames = []  # List to collect frames
indi_agent_rewards = {
    agent_id: [] for agent_id in agent_ids
}  # Dictionary to collect individual agent rewards

# Test loop for inference
for ep in range(episodes):
    state, info = env.reset()
    agent_reward = {agent_id: 0 for agent_id in agent_ids}
    score = 0
    for _ in range(max_steps):
        if channels_last:
            state = {
                agent_id: np.moveaxis(np.expand_dims(s, 0), [3], [1])
                for agent_id, s in state.items()
            }

        agent_mask = info["agent_mask"] if "agent_mask" in info.keys() else None
        env_defined_actions = (
            info["env_defined_actions"]
            if "env_defined_actions" in info.keys()
            else None
        )

        # Get next action from agent
        cont_actions, discrete_action = maddpg.getAction(
            state,
            epsilon=0,
            agent_mask=agent_mask,
            env_defined_actions=env_defined_actions,
        )
        if maddpg.discrete_actions:
            action = discrete_action
        else:
            action = cont_actions

        # Save the frame for this step and append to frames list
        frame = env.render()
        frames.append(_label_with_episode_number(frame, episode_num=ep))

        # Take action in environment
        state, reward, termination, truncation, info = env.step(action)

        # Save agent's reward for this step in this episode
        for agent_id, r in reward.items():
            agent_reward[agent_id] += r

        # Determine total score for the episode and then append to rewards list
        score = sum(agent_reward.values())

        # Stop episode if any agents have terminated
        if any(truncation.values()) or any(termination.values()):
            break

    rewards.append(score)

    # Record agent-specific episodic reward for each agent
    for agent_id in agent_ids:
        indi_agent_rewards[agent_id].append(agent_reward[agent_id])

    print("-" * 15, f"Episode: {ep}", "-" * 15)
    print("Episodic Reward: ", rewards[-1])
    for agent_id, reward_list in indi_agent_rewards.items():
        print(f"{agent_id} reward: {reward_list[-1]}")

env.close()

# Save the gif to specified path
gif_path = "./videos/"
os.makedirs(gif_path, exist_ok=True)
imageio.mimwrite(
    os.path.join("./videos/", "space_invaders.gif"), frames, duration=10
)
```

### System info

```
(py38) skr@skr-B650M-Pro-RS-WiFi:~/PettingZoo$ pip list
Package Version
--------------------------- -----------
accelerate 0.18.0
agilerl 0.1.19
antlr4-python3-runtime 4.9.3
appdirs 1.4.4
async-timeout 5.0.1
atari-py 0.2.9
AutoROM 0.6.1
AutoROM.accept-rom-license 0.6.1
blinker 1.8.2
cachetools 5.5.0
certifi 2024.8.30
cffi 1.17.1
cfgv 3.4.0
charset-normalizer 3.4.0
chess 1.7.0
click 8.1.7
cloudpickle 1.2.2
contourpy 1.1.1
cycler 0.12.1
dill 0.3.9
distlib 0.3.9
docker-pycreds 0.4.0
Farama-Notifications 0.0.4
fastrand 1.8.0
filelock 3.16.1
Flask 3.0.3
flatten-dict 0.4.2
fonttools 4.55.3
fsspec 2024.10.0
future 1.0.0
gitdb 4.0.11
GitPython 3.1.43
google-api-core 2.24.0
google-auth 2.37.0
google-cloud-core 2.4.1
google-cloud-storage 2.19.0
google-crc32c 1.5.0
google-resumable-media 2.7.2
googleapis-common-protos 1.66.0
gym 0.14.0
gym-notices 0.0.8
gymnasium 0.28.1
h5py 3.11.0
hanabi-learning-environment 0.0.4
huggingface-hub 0.27.0
hydra-core 1.3.2
identify 2.6.1
idna 3.10
imageio 2.35.1
importlib_metadata 8.5.0
importlib_resources 6.4.5
itsdangerous 2.2.0
jax-jumpy 1.0.0
Jinja2 3.1.4
kiwisolver 1.4.7
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.7.5
mdurl 0.1.2
minari 0.4.3
mpmath 1.3.0
multi-agent-ale-py 0.1.11
multiagent 0.0.1
networkx 3.1
nodeenv 1.9.1
numpy 1.24.4
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.20.5
nvidia-nvjitlink-cu12 12.6.85
nvidia-nvtx-cu12 12.1.105
omegaconf 2.3.0
opencv-python 4.10.0.84
packaging 24.2
pathtools 0.1.2
pettingzoo 1.23.1
pillow 10.4.0
pip 24.2
platformdirs 4.3.6
portion 2.6.0
pre-commit 3.5.0
proto-plus 1.25.0
protobuf 4.25.5
psutil 6.1.1
pyasn1 0.6.1
pyasn1_modules 0.4.1
pycparser 2.22
pygame 2.3.0
pyglet 1.3.2
Pygments 2.18.0
pyparsing 3.1.4
python-dateutil 2.9.0.post0
PyYAML 6.0.2
redis 4.6.0
regex 2024.11.6
requests 2.32.3
rich 13.9.4
rlcard 1.0.5
rsa 4.9
safetensors 0.4.5
scipy 1.10.1
sentry-sdk 2.19.2
setproctitle 1.3.4
setuptools 75.1.0
shellingham 1.5.4
six 1.16.0
smdv 0.1.1
smmap 5.0.1
sortedcontainers 2.4.0
SuperSuit 3.9.3
sympy 1.13.3
termcolor 1.1.0
tinyscaler 1.2.8
tokenizers 0.20.3
torch 2.4.1
torchvision 0.19.1
tqdm 4.67.1
transformers 4.46.3
triton 3.0.0
typer 0.15.1
typing_extensions 4.12.2
urllib3 2.2.3
virtualenv 20.28.0
wandb 0.13.11
websockets 13.1
Werkzeug 3.0.6
wheel 0.44.0
zipp 3.20.2
```

### Additional context

Python 3.8

### Checklist

- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/PettingZoo/issues) in the repo
open
2024-12-19T21:44:28Z
2024-12-19T21:44:28Z
https://github.com/Farama-Foundation/PettingZoo/issues/1254
[ "bug" ]
skr3178
0