| repo_name (string, 9-75 chars) | topic (30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (list, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
jupyter-incubator/sparkmagic
|
jupyter
| 738
|
[qn] Equivalent to `%run -i localscript.py` that runs a script on the cluster
|
[qn] Is there an equivalent to `%run -i localscript.py` that runs a script on the cluster?
E.g., in the local notebook, I have
```py
x=1
y=2
```
in `localscript.py`, and I would like to run this on the cluster itself.
Currently, running `%run` executes the script locally in the notebook rather than on the cluster.
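A possible workaround (my sketch, not something sparkmagic documents for `%run`): read the script's source in the local kernel, ship it to the Livy session, and `exec()` it there so the variables are defined on the cluster. The magic flags below are assumptions; check `%%send_to_spark?` in your sparkmagic version.
```python
# --- local IPython cell: read the script text on the notebook machine ---
script_source = open("localscript.py").read()

# --- next cell: push the string into the Spark (Livy) session ---
# %%send_to_spark -i script_source -t str -n script_source

# --- %%spark cell: runs on the cluster ---
# exec(script_source)   # defines x and y in the remote session
# print(x, y)
```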
|
closed
|
2021-10-29T09:57:40Z
|
2021-11-12T04:37:41Z
|
https://github.com/jupyter-incubator/sparkmagic/issues/738
|
[] |
shern2
| 2
|
Yorko/mlcourse.ai
|
scikit-learn
| 624
|
A2 demo - strict inequalities in problem statement but >= and <= in solution
|
question 1.7:
>height is strictly less than 2.5%-percentile
[https://en.wikipedia.org/wiki/Inequality_(mathematics)](https://en.wikipedia.org/wiki/Inequality_(mathematics))
|
closed
|
2019-09-24T16:31:09Z
|
2019-09-29T20:56:01Z
|
https://github.com/Yorko/mlcourse.ai/issues/624
|
[] |
flinge
| 1
|
pyeve/eve
|
flask
| 580
|
retrieving sub resources returns empty
|
With a sub-resource configured as below:
invoices = {
    'url': 'people/<regex("[a-f0-9]{24}"):contact_id>/invoices'
}
POST to `people/<contact_id>/invoices` was successful; the document was created in the Mongo collection "invoices". However, GET `people/<contact_id>/invoices` returns empty.
```
...
```
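For reference, a minimal sketch of a fuller sub-resource definition (my assumption based on the Eve sub-resource docs, not the reporter's actual settings): the `contact_id` part of the URL has to match a field of the same name stored in each invoice document, because Eve filters the sub-resource on that field for GET.
```python
# Hypothetical domain; field and resource names mirror the URL rule above.
people = {
    'schema': {'name': {'type': 'string'}},
}

invoices = {
    'url': 'people/<regex("[a-f0-9]{24}"):contact_id>/invoices',
    'schema': {
        'contact_id': {
            'type': 'objectid',
            'data_relation': {'resource': 'people', 'field': '_id'},
        },
        'amount': {'type': 'number'},
    },
}

DOMAIN = {'people': people, 'invoices': invoices}
```
If the POSTed invoice documents do not actually contain a `contact_id` field matching the `<contact_id>` in the URL, the GET filter matches nothing, which would explain the empty result.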
|
closed
|
2015-03-24T14:17:06Z
|
2015-03-27T17:36:10Z
|
https://github.com/pyeve/eve/issues/580
|
[] |
guonsoft
| 5
|
AUTOMATIC1111/stable-diffusion-webui
|
deep-learning
| 16,681
|
[Bug]: Getting an error: RuntimeError: No HIP GPUs are available
|
### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I'm on a completely fresh install of Ubuntu 22.04.2. I followed the steps in "Automatic Installation on Linux". I got the WebUI open, but it gives me an error: RuntimeError: No HIP GPUs are available. What am I doing wrong? What do I need to get this installed and working? My system is an Intel 9900K, 32 GB RAM and a Radeon 6700 XT. I am new to this and don't know half of what I'm doing, so please be patient with me.
### Steps to reproduce the problem
1. Open a terminal in the folder where I want to install the WebUI
2. Copied this into the terminal:
sudo apt install git python3.10-venv -y
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui && cd stable-diffusion-webui
python3.10 -m venv venv
3. Then I ran it with:
./webui.sh --upcast-sampling --skip-torch-cuda-test
4. The WebUI opens, and when I try to generate an image it spits out:
error: RuntimeError: No HIP GPUs are available
### What should have happened?
It should open the WebUI and use my GPU to generate images...
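Not part of the original report, but a quick diagnostic sketch: on an AMD card the webui's venv needs the ROCm build of PyTorch, and this error is typically what a CPU-only or CUDA-only wheel produces. Running this inside the webui venv shows which build is installed:
```python
import torch

print(torch.__version__)          # ROCm builds usually carry a "+rocm..." suffix
print(torch.version.hip)          # None -> this wheel was built without HIP/ROCm support
print(torch.cuda.is_available())  # False when no usable (HIP or CUDA) device is found
```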
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-11-25-14-11.json](https://github.com/user-attachments/files/17904136/sysinfo-2024-11-25-14-11.json)
### Console logs
```Shell
serwu@serwu-Z390-AORUS-MASTER:~/Desktop/Ai/stable-diffusion-webui$ ./webui.sh --upcast-sampling --skip-torch-cuda-test
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on serwu user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
glibc version is 2.35
Cannot locate TCMalloc. Do you have tcmalloc or google-perftool installed on your system? (improves CPU memory usage)
Python 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --upcast-sampling --skip-torch-cuda-test
/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'No HIP GPUs are available', memory monitor disabled
Loading weights [6ce0161689] from /home/serwu/Desktop/Ai/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Creating model from config: /home/serwu/Desktop/Ai/stable-diffusion-webui/configs/v1-inference.yaml
/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Startup time: 4.8s (import torch: 2.3s, import gradio: 0.5s, setup paths: 0.5s, other imports: 0.2s, load scripts: 0.3s, create ui: 0.3s, gradio launch: 0.5s).
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/sd_models.py", line 693, in get_sd_model
load_model()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/sd_models.py", line 868, in load_model
with devices.autocast(), torch.no_grad():
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 228, in autocast
if has_xpu() or has_mps() or cuda_no_autocast():
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 28, in cuda_no_autocast
device_id = get_cuda_device_id()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 40, in get_cuda_device_id
) or torch.cuda.current_device()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 778, in current_device
_lazy_init()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
torch._C._cuda_init()
RuntimeError: No HIP GPUs are available
Stable diffusion model failed to load
Using already loaded model v1-5-pruned-emaonly.safetensors [6ce0161689]: done in 0.0s
*** Error completing request
*** Arguments: ('task(y5cdfr3bjrgz0kp)', <gradio.routes.Request object at 0x7ff2024c1480>, 'woman', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/processing.py", line 847, in process_images
res = process_images_inner(p)
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/processing.py", line 920, in process_images_inner
with devices.autocast():
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 228, in autocast
if has_xpu() or has_mps() or cuda_no_autocast():
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 28, in cuda_no_autocast
device_id = get_cuda_device_id()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 40, in get_cuda_device_id
) or torch.cuda.current_device()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 778, in current_device
_lazy_init()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
torch._C._cuda_init()
RuntimeError: No HIP GPUs are available
---
```
### Additional information
_No response_
|
open
|
2024-11-25T14:13:29Z
|
2024-11-25T14:13:29Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16681
|
[
"bug-report"
] |
Bassoopioka
| 0
|
babysor/MockingBird
|
deep-learning
| 455
|
[Help] Preprocessing failed, please take a look
|
D:\Aria2\Sound_File_Processing-master\Sound_File_Processing-master>python pre.py D:\Aria2\aidatatang_200zh -d aidatatang_200zh -n 10
python: can't open file 'D:\Aria2\Sound_File_Processing-master\Sound_File_Processing-master\pre.py': [Errno 2] No such file or directory
|
closed
|
2022-03-15T07:27:09Z
|
2022-03-15T09:15:57Z
|
https://github.com/babysor/MockingBird/issues/455
|
[] |
tsyj9850
| 0
|
IvanIsCoding/ResuLLMe
|
streamlit
| 15
|
App not working
|
closed
|
2023-06-12T05:31:03Z
|
2023-06-13T23:43:45Z
|
https://github.com/IvanIsCoding/ResuLLMe/issues/15
|
[] |
Amjad-AbuRmileh
| 3
|
|
explosion/spaCy
|
data-science
| 12,946
|
lookups in Language.vocab do not reproduce
|
## How to reproduce the behaviour:
With this function:
```
import spacy
import numpy as np
nlp = spacy.load("en_core_web_md")
def get_similar_words(aword, top_k=4):
    word = nlp.vocab[str(aword)]
    others = (w for w in nlp.vocab if np.count_nonzero(w.vector))
    similarities = ((w.text, w.similarity(word)) for w in others)
    return sorted(similarities, key=lambda x: x[1], reverse=True)[:top_k]
```
The first calls to the function are reproducible:
```
# starting 'blank', results reproduce
print(get_similar_words('cat'))
print(get_similar_words('cat')) # calling twice to show reproducibility
```
Results in:
```
[('cat', 1.0), ("'Cause", 0.2827487885951996), ('Ol', 0.2824869751930237), ('you', 0.27984926104545593)]
[('cat', 1.0), ("'Cause", 0.2827487885951996), ('Ol', 0.2824869751930237), ('you', 0.27984926104545593)]
```
So far, so good. But after performing some lookup in the vocab, the results become different:
```
# Outcome changes if before the call, a (related) word is looked up in the dictionary
_ = nlp.vocab['dog'] # some lookup
print(get_similar_words('cat')) # same call as before
```
Results in:
```
[('cat', 1.0), ('dog', 0.8220816850662231), ("'Cause", 0.2827487885951996), ('Ol', 0.2824869751930237)]
```
which is different from before ('dog' wasn't there). If you do another lookup on vocab (e.g. 'kitten') and repeat the call to the function,
the result also contains the last looked up word:
```
_ = nlp.vocab['kitten']
print(get_similar_words('cat'))
```
Result:
```
[('cat', 1.0), ('kitten', 0.9999999403953552), ('dog', 0.8220816850662231), ("'Cause", 0.2827487885951996)]
```
Very strange. Actually I would have expected the words that I looked up myself to be part of the initial results, but it seems those words are somehow not present, or not fully initialized with a vector, until they are explicitly looked up.
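A sketch of a possible workaround (my assumption based on spaCy's public API, not a confirmed fix): iterate over the vectors table, which already holds every key that has a vector, instead of over `nlp.vocab`, which only contains lexemes that have been seen or looked up:
```python
import numpy as np
import spacy

nlp = spacy.load("en_core_web_md")

def get_similar_words(aword, top_k=4):
    word = nlp.vocab[str(aword)]
    # The keys of the vectors table are string hashes; Vocab accepts them directly.
    others = (nlp.vocab[key] for key in nlp.vocab.vectors.keys())
    similarities = (
        (w.text, w.similarity(word))
        for w in others
        if np.count_nonzero(w.vector)
    )
    return sorted(similarities, key=lambda x: x[1], reverse=True)[:top_k]
```
This may be slower for models with large vector tables, but the result no longer depends on which strings happen to have been looked up beforehand.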
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: Ubuntu 22.04.2 LTS
* Python Version Used: 3.10.12
* spaCy Version Used: 3.5.3
* Environment Information: in Jupyter notebook
|
closed
|
2023-08-31T18:21:01Z
|
2023-09-01T16:23:33Z
|
https://github.com/explosion/spaCy/issues/12946
|
[
"feat / vectors"
] |
mpjanus
| 1
|
jina-ai/clip-as-service
|
pytorch
| 195
|
E:GRAPHOPT:[gra:opt:139]:fail to optimize the graph!
|
[**Prerequisites**]
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution: Linux (MobaXterm)
- TensorFlow installed from (source or binary): pip install tensorflow-gpu & pip install --upgrade tensorflow-gpu
- TensorFlow version: 1.12
- Python version: 3.6
- `bert-as-service` version: pip install -U
- GPU model and memory:

- CPU model and memory:
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
```bash
bert-serving-start -model_dir /tmp/chinese_L-12_H-768_A-12/ -num_worker=4
```

Then this issue shows up:


...
|
closed
|
2019-01-17T02:22:22Z
|
2019-01-17T04:15:20Z
|
https://github.com/jina-ai/clip-as-service/issues/195
|
[] |
HannahXu
| 10
|
coqui-ai/TTS
|
python
| 3,029
|
[Feature request]
|
<!-- Welcome to the 🐸TTS project!
We are excited to see your interest, and appreciate your support! --->
**🚀 Feature Description**
<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Solution**
<!-- A clear and concise description of what you want to happen. -->
**Alternative Solutions**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
|
closed
|
2023-10-03T18:29:37Z
|
2023-10-03T18:29:53Z
|
https://github.com/coqui-ai/TTS/issues/3029
|
[
"feature request"
] |
Parvezkhan0
| 0
|
widgetti/solara
|
jupyter
| 791
|
AppBar does not play nicely with themes
|
Hi,
I noticed that whenever `solara.AppBar()` is used in an app (or a page of an app), it somehow messes up the theme.
[Example of of things going wrong](https://py.cafe/snippet/solara/v1#c=H4sIALjw6WYAA5VW227bOBD9FUN9SApEgiU7thNAwLaLLvq42C12H5piQUm0xVoiGZJKohb59z0zkhUnvSR1gACc-wzPHOprVJpKRpeRaq1xYeZNI5y40o-OSSOKK32lfxuPpYFWSx2udCW3sz_FTp6-viSLGX63KtQHxzfWvhWOlaSi30PI5EMtW_nB7HaNPJVaFI38T3TB5H-IxsvXDy7HEdlPFP5RzB8YnZ68N608eWp5VMUHeRdOT_6G1aw0OqClE-T91vxFFfy0knd3orWNjNPvlvPdsiaX-Ki2X06Y_XrC7DjhC6b8pjAdbJ8b82j2goC_I70oXxByMrzS0Vnk5HWnHDClgwekD1iGJvSWQD5IcBbW_qPkbXS5JaidRbJS4R0jMLoMroPE9qE2Gj62N5WqZHwzT7LzJIVzI3o0El1-jW6k84qsMiQ3JvxlEPLrIZvD6Swqa9VUTsLo46QJgI8MUN6qKtTRZXo-P4tapf8djovh9F6qXY08dFQV3LaqkW8R1UvHjSst3Q8ykGlcDLYwsYLiRtH9p_uzb6v4QYkPfhhXYvvH3i_o7vmOxgqf6-bQyPfLr8eo6fr8JTmDdBCL5rmkBzvKSn_3ZzxVIOvjJ4DAlHs6UhTgDfavZrXw9eWsmq82c5ldpMUyS7eiWBVVutik8-Vyk23O19srLXSvTJ4vk2Uyx8ntjMbKbbcqz7NFkj4RxoXSldI7D22aZMl89mrWIjsZYS55niYLdvEhmL3UZIfIKSQhODqNMQtR7kvRNHkOKLNAgm7Vtmu86eySCkqzJIO8kaKs83yVzMmsxEEGYxrEOkcuRC6lgyPVO0f0dZLBa2ggTVJsCU61cLieWBvXikZ9kS7PF3Amw0aV-zzfoKo1TqZth4qgqmTR7WxPYaCeGq1kaZwIBjEQnMLj3em8rO5a7mZNInknS7Sjd9Q_N7zFSD57oz3qbwWJM26I7o3ujypCtUtIritNSbn0Ok0paMq3U3et0Cifp8PeqtKItSA3Zfu9dFqiiBWxAzqAiIiD21tRtFEQ76RGdTTDcfpQ3HSIi364peGMsXL7Yy7bgyJ2MsCN5nUOkTdVh2GAd2j85AzDz2AwLvqCT0p_Fhm3RzXRDKxRACpfAkc-ngs6WycLyDrUKl2MCwKkKSO6OhIbh3LP6bofhPKG2JZSr45DYFtBjnwTSDdd5GN1fFgx9ge0JgO8BLHtd8zkpGOAH-mmoVA3Kfxa4faVucVIAMeHY6wC4kxNk7izXmzRB2GEHYNtTGhUESvdgA54ihwDY2Z08Qxb5UNHWgqFsy7KRnivSr6soxZJMYwPnpwVEqPRLkQA6oqAo4st7QUkQNycJdKHWPhel0QNKaZJnibIwhgAdYXxHOUY5bGv1bg8FKJrh9XJOIdBSoeXC1Na484QzgpdmRLoxxQhHeFusahIOcfGYR5W3llZorBlAihNGS1WFtyHpeYBrQmJVjUN8w-65PCNCNRWpSj6ctho6_BxFWrZ-QlWKHewh8qGmJhlryCnyS4xeOtpUWg0F8nmoQLQsjOl9AyIoZ_OSQBQjIxG1fcltTMgL6PmJhDhvnkfbd8yMvD9AK7EFtEk5skFaYaXvg7BDhHZHE-8J4of14uz8EZXIsih0gyzo14HOa1W3OCzdigDxZKqF8RVRKkc9Ut7DSUo5mjIDrQbaK2J_ZCHvmcACy5-wWvutuVisbiI0bNCeiJEwidumzQXm9VTDXI5RVyeLhCTIuBEN0Hsd0hjK89bAgfYTOV4qassODxqkykYvbPjU7AiQkLXXt3xDjBivVZ4BhjB_FAMX1x8ZGYazhM9PBF3_IAcRNhUBXqZnjOPT709TR3xRrqByDUyBAblgml15JSKIY1-4IhHoS-9ByGOTyXGM1jwoqBFFRCFnzewPuLSJwDN5Jt7RmlAb7acbwhdMMN78whKy-Ex6pwCu-FrGr5TWvAJeGYxvU_djQKn4r5ROxq60rcilHVl8II9BgbL-eODcXlMN7clf0OOeEXptxJ00xh--JfDnUMk8c6OHxHEtKgQQo8IeKIPe0l3PFgPCloM5lfIBr7FF8Oh1aFESuh9g3_XjQoSp-j-f9BjlCVTDgAA)
- In dark mode the AppBar tabs are not visible when selected (the tab name's color probably coincides with the primary color)
- In light mode the `solara.Text()` elements are not visible even though they are not part of the NavBar component (but they are visible in dark mode).
Here is the same example, but without the NavBar component (commented out)
[Example that "works" without the NavBar component](https://py.cafe/snippet/solara/v1#c=H4sIAHnx6WYAA51WbW_bNhD-K4b6ISkQCZbs2E4AAWuLDv04bMX2oSkGSqIt1hLJkFQStch_33MnWXHatGnnAAHu_YV3z-lLVJpKRpeRaq1xYeZNI5y40o_IpBHFlb7Sv41kaSDVUocrXcnt7A-xk6cvL0ljht-L2a0K9cH0lbWvhWPxIKTfg9vkfS1b-d7sdo08lVoUjfxXdMHkv4vGy5eD0S-qH4dnG1H4KYHvKJyevDOtPDnWOor8Xt6F05O_oDErjQ6o_ASxHqs-G_WH0d_eidY2Mk6_SeHJVCb1-CifXwqU_Vqg7DjQM518VZgOej9q5ajypMbrLgSjT0_eNKrcz_AoZ2h6Y1x-Yp1qheuP7Z5M4A1SFeUzKUxK_z-J6Cxy8rpTDjOpg8cWHdYHktBb2quBA1pY-7eSt9Hllkb1LJKVCm95gqPL4DpwbB9qo2Fje1OpSsY38yQ7T1IYN6JHw6LLL9GNdF6RVobgxoQ_DVx-OURzoM6islZN5SSUPkySgHGUAcJbVYU6ukzP52dRq_Q_A7kYqHdS7WrEIVJVMNuqRr6GVy8dN0xp6b4TgVTjYtCFihXkN4ruP96ffZvFd1J8sEO7Ets_tv6J6p6vaMzwuWoOhTydfj16TdfnPxMzSAe2aJ4LetCjqPR3f8ZdxWR9-IghMOWeSPKCeYP-i1ktfH05q-arzVxmF2mxzNKtKFZFlS426Xy53GSb8_X2SgvdK5Pny2SZzEG5ndFY6e1W5Xm2SNKvmHGhdKX0zkOaJlkyB263iE5K6Euep8mCTTy2ZC816cFzCk4IjqjRZyHKfSmaJs8xysyQgGu17RpvOrukhNIsycBvpCjrPF8lc1IrQchgTANf54gFz6V0MKR85_C-TjJYDQWkSYotAVULh-eJtXGtaNRn6fJ8AWNSpDXO8w2yWoMybTtkBFEli25ne3ID8VRoJUvjRMDGIwFE4FPXeVndtVzNmljyTpYoR--ofi54i5Z88kZ75N8KYmdcEL0bvR9lhGyX4FxXmoJy6nWaktOUX6fuWqGRPneHrVWl4WtBZsr2e-m0RBIrQgdUABYBB5e3Im8jI95Jjeyoh2P3Ibjp4Bf1cEkDjbZy-WMs2wMidjLAjPp1DpY3VYdmAHeo_WQMxU9AME76gimlP4mMy6OcqAfWKAwqPwJ7Pu4LKlsnC_A65CpdjAfCSFNEVHXENg7pntNzPzDlDaEthV4du8C2Ahz5JRBuesjH4viwYmyP0ZoUcD1i2-8YyUnGA34km5pC1aSwwxXYV-YWLcE4PpCxCvAzFU3sznqxRR00I2wYbGNCo4pY6QZwwF1kH2gzTxf3sFU-dCQlV6B1UTbCe1XyYx2VSIKhfbDkqOAYjXLBwqCuaHB0saW9AAcTN2eO9CEWvtclQUOKbpKlCbIwBoO6QnuOYoz82NdqXB5y0bXD6mQcwyCkw-VCl9Z4M7izQlemxPSji-CO426xqAg5x8ahH1beWVkisWWCUZoiWqwssA9LzQ1a0yRa1TSMP6iS3TciUFmVIu_LYaOtw4daqGXnp7FCuoM-RDbEhCx7BT51donGW0-LQq25SDYPGQCWnSml54EY6umcxACKEdEo-76kcobJy6i4aYjw3ryPtm95MvDdAazEFlEn5skFSYZLX4dgB4-sjhPvCeLH9eIovNGVCHLINEPvqNaBT6sVN_gsHtJAsiTqBWEVQSp7_dxeQwiIOWqyA-ziQ2dAP8Sh7xmMBSe_4DV323KxWFzEqFkhPAEizSdemyQXm9XXEsRyirA8XcAneQBFL0HodwhjK89bAgPoTOl4qassOBy1SRWI3tnxFKwIkFC1V3e8AzyxXiucAZ5gPhTDFxeTjEwDPcHDV-yOD8iBhU1VgJfpnHl8Iu6p6_A3wg1YrpEh8FAuGFZHTKl4pFEPDHEU-tJ7AOJ4KtGeQYMXBSWqAC983oD68EufANSTb94ZqWF6s-V8Q9MFNdybR6O0HI5R5xTQDV_rsJ3CAk-AM4vpPnU3CpiK90buKOhK34pQ1pXBBXs8GMznjw-ey2O4uS35G3KcV6R-KwE3-DimqVkObw6WxJ0dPyIIaZEhmB4ecKIPe0lvPGgPAloMxlfwBrzFF8Oh1CFFCuh9g3_XjQoSVHT_HzXf88fGDgAA)
[Finally, a kind of patch around this is to use `solara.v.AppBar()` instead](https://py.cafe/snippet/solara/v1#c=H4sIAHHz6WYAA6VWbW_bNhD-K4b2ISkQCZbs2E4AAWuLDv04bMX2oSkGWqIt1hLJkFQctch_33MnWXHSZEkxBwjAe33uePdQ36PClDK6jFRjjQsTb2rhxJV-cExqsb7SV_rX4VgYaLXU4UqXcjP5XWzl6ZtLspjg98tkr0J1cH1r7TvhWE3KY9XNYyX97jMmnyrZyE9mu63lqdRiXct_RBtM_puovXxz73Ick_3E2j-I-YzR6clH08iTx5ZHKD7J23B68iesJoXRARWfIO-P5q9C8J9IPtyKxtYyTp-E8ySs0SU-wvbTCbOfT5gdJ3xFl9-uTQvbl9o8mD1r9a4NwejTk_e1KnYTXNwZLqU2Lj-xTjXCdY99nwTzHtBF8Qo4o-H_AxSdRU5et8phlnXw2LTDikETOku710twFtb-peQ-utzQiJ9FslThA09-dBlcC4ntQmU0fGxnSlXK-GaaZOdJCudadGhgdPk9upHOK7LKkNyY8IdByO-HbA6ns6ioVF06CaPPoyZgbGWAcq_KUEWX6fn0LGqU_rs_zvrTR6m2FfLQUZVw26havkNULx03TWnpnslApvG6t4WJFRQ3iu6-3J39iOIZiPd-aFdiu4fer6ju5YoGhC9VcyjkafjVEDVdnr8mZ5AOYlG_lPRgR1np7-6Mu4rJ-vwFQ2CKHR0pCuYN9r9MKuGry0k5XaymMrtI1_Ms3Yj1Yl2ms1U6nc9X2ep8ubnSQnfK5Pk8mSdTnNzWaKz6ZqPyPJsl6SNhvFa6VHrroU2TLJmC-BtkJyP0Jc_TZMYuHluyk5rsEDmFJARHpyHmWhS7QtR1nmOUWSBB82rT1t60dk6A0izJIK-lKKo8XyRTMitwkMGYGrHOkQuRC-ngSHiniL5MMnj1BaRJii3BqRIO1xNr4xpRq2_S5fkMzmRIa5znK6Ba4mSapkcEVSnX7dZ2FAbqsdBSFsaJgI0HAGTg57D1srxtuJolieStLFCO3lL9XPAGLfnqjfbA3wgSZ1wQ3RvdHyEC2jkk16WmpAy9SlMKmvLtVG0jNOBzd9hblRqxZuSmbLeTTkuAWBA7oAKIiDi4vAVFGwTxVmqgox4O3YfipkVc1MMl9We0lcsfctkOFLGVAW7Ur3OIvClbNAO8Q-0nZxh-BYMx6As-Kf1VZFweYaIeWKMwqHwJHPm4L6hsmcwga4FVuhgXhJGmjKjqSGwc4J7Tdd8L5Q2xLaVeHIfAtoIc-SaQbrzIh-r4sGLsj9EaDfCKxLbbMpOTjgf8SDc2hapJ4YdXYFeaPVqCcbw_xiogzlg0iVvrxQZ10IywY7C1CbVax0rXoAPuIsdAm3m6uIeN8qElLYXCWa-LWnivCr6soxJJ0bcPnpwVEqNRLkQY1AUNjl5vaC8gwcRNWSJ9iIXvdEHUkKKb5GmCXBuDQV2gPUc5BnnsKzUsD4Vom351Ms5hkNLh5UKXlrgzhLNCl6bA9KOLkA7jbrGoSDnFxqEfVt5aWQDYPMEojRktVhbch6XmBi1pEq2qa-YfVMnhaxGorFJR9Hm_0dbhoy5UsvXjWAFubw-VDTExy05BTp2do_HW06JQay6S1T0C0LIzhfQ8EH09rZMYQDEwGqHvCiqnn7yMihuHCPfN-2i7hicD3x7gSmwRdWKaXJCmf-mrEGwfkc3xxHui-GG9OAtvdCmC7JFm6B3V2stpteIan9M9DIAlVSeIq4hSOeq35hpKUMxRkx1oFx86PfshD33PYCwY_IzX3G2K2Wx2EaNmhfREiDSfuG3SXKwWjzXI5RRxeTpDTIqAE90Esd8hjS09bwkcYDPC8VKXWXB41EZTMHprh6dgQYSEqr265R3gifVa4RngCeaHov_i4iMzU38e6eGRuOUH5CDCpirQy_iceXwm7qjriDfQDUSuliHwUM6YVgdOKXmkUQ8c8Sh0hfcgxOGpRHt6C14UlKgCovDzBtZHXPoEoJ78cM-AhunN5tMVTRfM8N48GKV5_xi1ToHd8BUP3zEt-AQ8Mxvfp_ZGgVNx38COgq70XoSiKg1esIeDwXL--OC5PKabfcHfkMO8Avpegm7wcUxTM-_vHCKJd3b4iCCmBUIIPSLgiT7sJd1xb90raDGYXyHr-RZfDIdSe4iU0Psa_65rFSRO0d2_3WTTx-oOAAA)
However the problem here is that the `theme.js` in [assets](https://solara.dev/documentation/advanced/reference/asset-files) is not respected (I think).
Any advice on this, or is this perhaps a known issue?
Many thanks!
|
open
|
2024-09-17T21:28:30Z
|
2024-09-18T15:05:12Z
|
https://github.com/widgetti/solara/issues/791
|
[] |
JovanVeljanoski
| 2
|
quantmind/pulsar
|
asyncio
| 311
|
passing a TemporaryFile to client.HttpRequest fails
|
* **pulsar version**:
2.0.2
* **python version**:
3.6.5
* **platform**:
Linux/Ubuntu16.04
## Description
Appending a TemporaryFile to a post request fails with this trace:
File "/opt/venv/dc-venv/lib/python3.6/site-packages/pulsar/apps/http/client.py", line 919, in _request
request = HttpRequest(self, url, method, params, **nparams)
File "/opt/venv/dc-venv/lib/python3.6/site-packages/pulsar/apps/http/client.py", line 247, in __init__
self.body = self._encode_body(data, files, json)
File "/opt/venv/dc-venv/lib/python3.6/site-packages/pulsar/apps/http/client.py", line 369, in _encode_body
body, ct = self._encode_files(data, files)
File "/opt/venv/dc-venv/lib/python3.6/site-packages/pulsar/apps/http/client.py", line 407, in _encode_files
fn = guess_filename(v) or k
File "/opt/venv/dc-venv/lib/python3.6/site-packages/pulsar/apps/http/client.py", line 64, in guess_filename
if name and name[0] != '<' and name[-1] != '>':
TypeError: 'int' object is not subscriptable
The problem here seems to be that python's tempfile.TemporaryFile's filename has type int and not string.
A look at the documentation of io.FileIO:
> The name can be one of two things:
>
> a character string or bytes object representing the path to the file which will be opened. In this case closefd must be True (the default) otherwise an error will be raised.
> an integer representing the number of an existing OS-level file descriptor to which the resulting FileIO object will give access. When the FileIO object is closed this fd will be closed as well, unless closefd is set to False.
As a user of pulsar the issue can be prevented by using NamedTemporaryFile instead of TemporaryFile.
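A minimal sketch of that workaround, mirroring the repro below (the URL is just a placeholder): `NamedTemporaryFile` exposes a string `.name`, so pulsar's `guess_filename()` gets something it can inspect.
```python
import asyncio
from tempfile import NamedTemporaryFile

from pulsar.apps import http

async def request():
    with NamedTemporaryFile() as tmp:
        tmp.write(b'hello world')
        tmp.seek(0)
        session = http.HttpClient()
        await session.post('http://google.com', files={'file.txt': tmp})

asyncio.get_event_loop().run_until_complete(request())
```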
## Expected behaviour
A TemporaryFile (which has no name) is a file-like object, and therefore it should be possible to pass it via the 'files' parameter of a request.
(It is possible with the 'requests' lib.)
<!-- What is the behaviour you expect? -->
## Actual behaviour
guess_filename fails during encoding the file because the filename is an int
<!-- What's actually happening? -->
## Steps to reproduce
```
from pulsar.apps import http
from tempfile import TemporaryFile
import asyncio

async def request():
    with TemporaryFile() as tmpFile:
        tmpFile.write('hello world'.encode())
        tmpFile.seek(0)
        sessions = http.HttpClient()
        await sessions.post(
            'http://google.com',
            files={'file.txt': tmpFile}
        )

asyncio.get_event_loop().run_until_complete(request())
```
|
open
|
2018-08-09T16:38:11Z
|
2018-08-09T16:42:12Z
|
https://github.com/quantmind/pulsar/issues/311
|
[] |
DavHau
| 0
|
taverntesting/tavern
|
pytest
| 404
|
Need support for handling images(jpg/png) in request's data section for REST API
|
Currently Tavern does not support placing an image in the data section of the request. As of now it accepts only "json", "yaml" and "yml", as per the source code referenced. Could you please let me know how to do it?
**pytest.ini:**
I have included this image in the .ini file because I was not able to reference the image file directly.
`[pytest]
tavern-global-cfg=
metro.jpg`
**Sample Test:**
```
test_name: Verify POST HPS_Insurance_Card API with 200 OK Status and its response parameters
stages:
  - name: Upload file and data
    request:
      url: "http://localhost:5000/card/extract/from_image"
      method: POST
      data:
        card_image: "metro.jpg"
        subscriber_name: "MARY SAMPLE"
    response:
      status_code: 200
```
While executing this code, it throws the exception below.
```
C:\Venkatesh\TestingMetrics&TestCases\InsuranceCardAPITesting\Aug12>pytest -v
============================= test session starts =============================
platform win32 -- Python 2.7.16, pytest-4.3.1, py-1.8.0, pluggy-0.9.0 -- c:\python27\python.exe
cachedir: .pytest_cache
metadata: {'Python': '2.7.16', 'Platform': 'Windows-10-10.0.16299', 'Packages': {'py': '1.8.0', 'pytest': '4.3.1', 'pluggy': '0.9.0'}, 'JAVA_HOME': 'C:\\Program Files\\Java\\jdk-11.0.1', 'Plugins': {'tavern': '0.26.2', 'html': '1.20.0', 'metadata': '1.8.0'}}
rootdir: C:\Venkatesh\TestingMetrics&TestCases\InsuranceCardAPITesting\Aug12, inifile: pytest.ini
plugins: tavern-0.26.2, metadata-1.8.0, html-1.20.0
collected 1 item
test_hps_analytic_cloud_api_knowledge_base_concepts.tavern.yaml::Verify POST HPS_Insurance_Card API with 200 OK Status and its response parameters FAILED [100%]
================================== FAILURES ===================================
_ C:\Venkatesh\TestingMetrics&TestCases\InsuranceCardAPITesting\Aug12\test_hps_analytic_cloud_api_knowledge_base_concepts.tavern.yaml::Verify POST HPS_Insurance_Card API with 200 OK Status and its response parameters _
cls = <class '_pytest.runner.CallInfo'>
func = <function <lambda> at 0x0000000007AB00B8>, when = 'call'
reraise = (<class '_pytest.outcomes.Exit'>, <type 'exceptions.KeyboardInterrupt'>)
@classmethod
def from_call(cls, func, when, reraise=None):
#: context of invocation: one of "setup", "call",
#: "teardown", "memocollect"
start = time()
excinfo = None
try:
> result = func()
c:\python27\lib\site-packages\_pytest\runner.py:226:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> lambda: ihook(item=item, **kwds), when=when, reraise=reraise
)
c:\python27\lib\site-packages\_pytest\runner.py:198:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_HookCaller 'pytest_runtest_call'>, args = ()
kwargs = {'item': <YamlItem Verify POST HPS_Insurance_Card API with 200 OK Status and its response parameters>}
notincall = set([])
def __call__(self, *args, **kwargs):
if args:
raise TypeError("hook calling supports only keyword arguments")
assert not self.is_historic()
if self.spec and self.spec.argnames:
notincall = (
set(self.spec.argnames) - set(["__multicall__"]) - set(kwargs.keys())
)
if notincall:
warnings.warn(
"Argument(s) {} which are declared in the hookspec "
"can not be found in this hook call".format(tuple(notincall)),
stacklevel=2,
)
> return self._hookexec(self, self.get_hookimpls(), kwargs)
c:\python27\lib\site-packages\pluggy\hooks.py:289:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_pytest.config.PytestPluginManager object at 0x00000000059C4B38>
hook = <_HookCaller 'pytest_runtest_call'>
methods = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from 'c:\python27\lib\site-packages\_pytest\runner.py...9EAC88>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x0000000007A84518>>]
kwargs = {'item': <YamlItem Verify POST HPS_Insurance_Card API with 200 OK Status and its response parameters>}
def _hookexec(self, hook, methods, kwargs):
# called from all hookcaller instances.
# enable_tracing will set its own wrapping function at self._inner_hookexec
> return self._inner_hookexec(hook, methods, kwargs)
c:\python27\lib\site-packages\pluggy\manager.py:68:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hook = <_HookCaller 'pytest_runtest_call'>
methods = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from 'c:\python27\lib\site-packages\_pytest\runner.py...9EAC88>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x0000000007A84518>>]
kwargs = {'item': <YamlItem Verify POST HPS_Insurance_Card API with 200 OK Status and its response parameters>}
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
methods,
kwargs,
> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
)
c:\python27\lib\site-packages\pluggy\manager.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hook_impls = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from 'c:\python27\lib\site-packages\_pytest\runner.py...9EAC88>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x0000000007A84518>>]
caller_kwargs = {'item': <YamlItem Verify POST HPS_Insurance_Card API with 200 OK Status and its response parameters>}
firstresult = False
def _multicall(hook_impls, caller_kwargs, firstresult=False):
"""Execute a call into multiple python functions/methods and return the
result(s).
``caller_kwargs`` comes from _HookCaller.__call__().
"""
__tracebackhide__ = True
results = []
excinfo = None
try: # run impl and wrapper setup functions in a loop
teardowns = []
try:
for hook_impl in reversed(hook_impls):
try:
args = [caller_kwargs[argname] for argname in hook_impl.argnames]
except KeyError:
for argname in hook_impl.argnames:
if argname not in caller_kwargs:
raise HookCallError(
"hook call must provide argument %r" % (argname,)
)
if hook_impl.hookwrapper:
try:
gen = hook_impl.function(*args)
next(gen) # first yield
teardowns.append(gen)
except StopIteration:
_raise_wrapfail(gen, "did not yield")
else:
res = hook_impl.function(*args)
if res is not None:
results.append(res)
if firstresult: # halt further impl calls
break
except BaseException:
excinfo = sys.exc_info()
finally:
if firstresult: # first result hooks return a single value
outcome = _Result(results[0] if results else None, excinfo)
else:
outcome = _Result(results, excinfo)
# run all wrapper post-yield blocks
for gen in reversed(teardowns):
try:
gen.send(outcome)
_raise_wrapfail(gen, "has second yield")
except StopIteration:
pass
> return outcome.get_result()
c:\python27\lib\site-packages\pluggy\callers.py:208:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pluggy.callers._Result object at 0x0000000007AB3EF0>
def get_result(self):
"""Get the result(s) for this hook call.
If the hook was marked as a ``firstresult`` only a single value
will be returned otherwise a list of results.
"""
__tracebackhide__ = True
if self._excinfo is None:
return self._result
else:
ex = self._excinfo
if _py3:
raise ex[1].with_traceback(ex[2])
> _reraise(*ex) # noqa
c:\python27\lib\site-packages\pluggy\callers.py:81:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hook_impls = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from 'c:\python27\lib\site-packages\_pytest\runner.py...9EAC88>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x0000000007A84518>>]
caller_kwargs = {'item': <YamlItem Verify POST HPS_Insurance_Card API with 200 OK Status and its response parameters>}
firstresult = False
def _multicall(hook_impls, caller_kwargs, firstresult=False):
"""Execute a call into multiple python functions/methods and return the
result(s).
``caller_kwargs`` comes from _HookCaller.__call__().
"""
__tracebackhide__ = True
results = []
excinfo = None
try: # run impl and wrapper setup functions in a loop
teardowns = []
try:
for hook_impl in reversed(hook_impls):
try:
args = [caller_kwargs[argname] for argname in hook_impl.argnames]
except KeyError:
for argname in hook_impl.argnames:
if argname not in caller_kwargs:
raise HookCallError(
"hook call must provide argument %r" % (argname,)
)
if hook_impl.hookwrapper:
try:
gen = hook_impl.function(*args)
next(gen) # first yield
teardowns.append(gen)
except StopIteration:
_raise_wrapfail(gen, "did not yield")
else:
> res = hook_impl.function(*args)
c:\python27\lib\site-packages\pluggy\callers.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
item = <YamlItem Verify POST HPS_Insurance_Card API with 200 OK Status and its response parameters>
def pytest_runtest_call(item):
_update_current_test_var(item, "call")
sys.last_type, sys.last_value, sys.last_traceback = (None, None, None)
try:
> item.runtest()
c:\python27\lib\site-packages\_pytest\runner.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <YamlItem Verify POST HPS_Insurance_Card API with 200 OK Status and its response parameters>
def runtest(self):
> self.global_cfg = load_global_cfg(self.config)
c:\python27\lib\site-packages\tavern\testutils\pytesthook\item.py:136:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (<_pytest.config.Config object at 0x0000000006B3F390>,), kwds = {}
key = (<_pytest.config.Config object at 0x0000000006B3F390>,), link = None
def wrapper(*args, **kwds):
# size limited caching that tracks accesses by recency
key = make_key(args, kwds, typed) if kwds or typed else args
with lock:
link = cache_get(key)
if link is not None:
# record recent use of the key by moving it to the front of the list
root, = nonlocal_root
link_prev, link_next, key, result = link
link_prev[NEXT] = link_next
link_next[PREV] = link_prev
last = root[PREV]
last[NEXT] = root[PREV] = link
link[PREV] = last
link[NEXT] = root
stats[HITS] += 1
return result
> result = user_function(*args, **kwds)
c:\python27\lib\site-packages\backports\functools_lru_cache.py:137:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
pytest_config = <_pytest.config.Config object at 0x0000000006B3F390>
@lru_cache()
def load_global_cfg(pytest_config):
"""Load globally included config files from cmdline/cfg file arguments
Args:
pytest_config (pytest.Config): Pytest config object
Returns:
dict: variables/stages/etc from global config files
Raises:
exceptions.UnexpectedKeysError: Invalid settings in one or more config
files detected
"""
# Load ini first
ini_global_cfg_paths = pytest_config.getini("tavern-global-cfg") or []
# THEN load command line, to allow overwriting of values
cmdline_global_cfg_paths = pytest_config.getoption("tavern_global_cfg") or []
all_paths = ini_global_cfg_paths + cmdline_global_cfg_paths
> global_cfg = load_global_config(all_paths)
c:\python27\lib\site-packages\tavern\testutils\pytesthook\util.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
global_cfg_paths = ['metroplus.jpg']
def load_global_config(global_cfg_paths):
"""Given a list of file paths to global config files, load each of them and
return the joined dictionary.
This does a deep dict merge.
Args:
global_cfg_paths (list(str)): List of filenames to load from
Returns:
dict: joined global configs
"""
global_cfg = {}
if global_cfg_paths:
logger.debug("Loading global config from %s", global_cfg_paths)
for filename in global_cfg_paths:
with open(filename, "r") as gfileobj:
> contents = yaml.load(gfileobj, Loader=IncludeLoader)
c:\python27\lib\site-packages\tavern\util\general.py:28:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
stream = <closed file 'metroplus.jpg', mode 'r' at 0x000000000799C8A0>
Loader = <class 'tavern.util.loader.IncludeLoader'>
def load(stream, Loader=None):
"""
Parse the first YAML document in a stream
and produce the corresponding Python object.
"""
if Loader is None:
load_warning('load')
Loader = FullLoader
> loader = Loader(stream)
c:\python27\lib\site-packages\yaml\__init__.py:112:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <tavern.util.loader.IncludeLoader object at 0x0000000007AB3F60>
stream = <closed file 'metroplus.jpg', mode 'r' at 0x000000000799C8A0>
def __init__(self, stream):
"""Initialise Loader."""
try:
self._root = os.path.split(stream.name)[0]
except AttributeError:
self._root = os.path.curdir
> Reader.__init__(self, stream)
c:\python27\lib\site-packages\tavern\util\loader.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <tavern.util.loader.IncludeLoader object at 0x0000000007AB3F60>
stream = <closed file 'metroplus.jpg', mode 'r' at 0x000000000799C8A0>
def __init__(self, stream):
self.name = None
self.stream = None
self.stream_pointer = 0
self.eof = True
self.buffer = u''
self.pointer = 0
self.raw_buffer = None
self.raw_decode = None
self.encoding = None
self.index = 0
self.line = 0
self.column = 0
if isinstance(stream, unicode):
self.name = "<unicode string>"
self.check_printable(stream)
self.buffer = stream+u'\0'
elif isinstance(stream, str):
self.name = "<string>"
self.raw_buffer = stream
self.determine_encoding()
else:
self.stream = stream
self.name = getattr(stream, 'name', "<file>")
self.eof = False
self.raw_buffer = ''
> self.determine_encoding()
c:\python27\lib\site-packages\yaml\reader.py:87:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <tavern.util.loader.IncludeLoader object at 0x0000000007AB3F60>
def determine_encoding(self):
while not self.eof and len(self.raw_buffer) < 2:
self.update_raw()
if not isinstance(self.raw_buffer, unicode):
if self.raw_buffer.startswith(codecs.BOM_UTF16_LE):
self.raw_decode = codecs.utf_16_le_decode
self.encoding = 'utf-16-le'
elif self.raw_buffer.startswith(codecs.BOM_UTF16_BE):
self.raw_decode = codecs.utf_16_be_decode
self.encoding = 'utf-16-be'
else:
self.raw_decode = codecs.utf_8_decode
self.encoding = 'utf-8'
> self.update(1)
c:\python27\lib\site-packages\yaml\reader.py:137:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <tavern.util.loader.IncludeLoader object at 0x0000000007AB3F60>
length = 1
def update(self, length):
if self.raw_buffer is None:
return
self.buffer = self.buffer[self.pointer:]
self.pointer = 0
while len(self.buffer) < length:
if not self.eof:
self.update_raw()
if self.raw_decode is not None:
try:
data, converted = self.raw_decode(self.raw_buffer,
'strict', self.eof)
except UnicodeDecodeError, exc:
character = exc.object[exc.start]
if self.stream is not None:
position = self.stream_pointer-len(self.raw_buffer)+exc.start
else:
position = exc.start
raise ReaderError(self.name, position, character,
> exc.encoding, exc.reason)
E ReaderError: 'utf8' codec can't decode byte #xff: invalid start byte
E in "metroplus.jpg", position 0
c:\python27\lib\site-packages\yaml\reader.py:170: ReaderError
============================== warnings summary ===============================
c:\python27\lib\site-packages\tavern\testutils\pytesthook\item.py:47
c:\python27\lib\site-packages\tavern\testutils\pytesthook\item.py:47: FutureWarning: Tavern will drop support for Python 2 in a future release, please switch to using Python 3 (see https://docs.pytest.org/en/latest/py27-py34-deprecation.html)
FutureWarning,
-- Docs: https://docs.pytest.org/en/latest/warnings.html
==================== 1 failed, 1 warnings in 1.95 seconds =====================
```
|
closed
|
2019-08-12T11:33:53Z
|
2019-08-30T14:58:40Z
|
https://github.com/taverntesting/tavern/issues/404
|
[] |
kvenkat88
| 2
|
youfou/wxpy
|
api
| 442
|
Cannot receive group messages
|
After logging in to the WeChat bot, I found that it can receive messages from friends and official accounts, but it cannot receive group messages. However, when I log in to the WeChat web version in a browser, group messages do come through, so I would like to ask whether there is a problem with the API.
|
open
|
2020-02-26T04:14:27Z
|
2020-02-26T04:14:27Z
|
https://github.com/youfou/wxpy/issues/442
|
[] |
ROC-D
| 0
|
chatanywhere/GPT_API_free
|
api
| 131
|
[Enhancement] Please support calling the new gpt-3.5-turbo (gpt-3.5-turbo-1106) with the free key
|
Please support calling the new gpt-3.5-turbo (gpt-3.5-turbo-1106) with the free key.
|
closed
|
2023-11-09T07:53:53Z
|
2023-11-09T08:47:06Z
|
https://github.com/chatanywhere/GPT_API_free/issues/131
|
[] |
kiritoko1029
| 0
|
huggingface/datasets
|
deep-learning
| 6,481
|
using torchrun, save_to_disk suddenly shows SIGTERM
|
### Describe the bug
When I run my code using the `torchrun` command and it reaches the `save_to_disk` part, I suddenly get the following warning and error messages:
Because the dataset is too large, the "save_to_disk" function splits it into 70 parts for saving. However, an error occurs suddenly when it reaches the 14th shard.
WARNING: torch.distributed.elastic.multiprocessing.api: Sending process 2224968 closing signal SIGTERM
ERROR: torch.distributed.elastic.multiprocessing.api: failed (exitcode: -7). traceback: Signal 7 (SIGBUS) received by PID 2224967.
### Steps to reproduce the bug
ds_shard = ds_shard.map(map_fn, *args, **kwargs)
ds_shard.save_to_disk(ds_shard_filepaths[rank])
Saving the dataset (14/70 shards): 20%|โโ | 875350/4376702 [00:19<01:53, 30863.15 examples/s]
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2224968 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 2224967) of binary: /home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/python
Traceback (most recent call last):
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
==========================================================
run.py FAILED
----------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
----------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-12-08_20:09:04
rank : 0 (local_rank: 0)
exitcode : -7 (pid: 2224967)
error_file: <N/A>
traceback : Signal 7 (SIGBUS) received by PID 2224967
### Expected behavior
I hope it can save successfully without any issues, but it seems there is a problem.
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-4.19.90-24.4.v2101.ky10.aarch64-aarch64-with-glibc2.28
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- PyArrow version: 14.0.0
- Pandas version: 2.1.2
|
open
|
2023-12-08T13:22:03Z
|
2023-12-08T13:22:03Z
|
https://github.com/huggingface/datasets/issues/6481
|
[] |
Ariya12138
| 0
|
GibbsConsulting/django-plotly-dash
|
plotly
| 205
|
Handling dash.exceptions.PreventUpdate
|
Dash allows the use of this special exception, `dash.exceptions.PreventUpdate`, when no components need updating, even though a callback is triggered.
Because callbacks are triggered on page load, I use this `PreventUpdate` feature quite a bit.
They are raised during this line in `dash_wrapper.py`:
```
res = self.callback_map[target_id]['callback'](*args, **argMap)
```
and never handled. Because they aren't handled, I get a 500, traceback and a bad console log, even though everything is pretty much behaving as expected. It would be nice if you could catch this specific error and return without raising anything, as whenever this is used, it is meant to be there.
I had a look at your code, but I'm not sure where exactly I'd best catch the error, and what I'd return to basically say 'do nothing here, but everything is OK'.
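For illustration only, a runnable toy sketch of the kind of handling I mean (this is not django-plotly-dash's actual code, and the "no update" return value is a placeholder):
```python
from dash.exceptions import PreventUpdate

def dispatch(callback, *args):
    try:
        return callback(*args)
    except PreventUpdate:
        # Not an error: the callback asked to leave its outputs untouched.
        return None  # placeholder for whatever "nothing changed" payload the view should send

def my_callback(value):
    if value is None:      # e.g. the initial page-load trigger
        raise PreventUpdate
    return value * 2

print(dispatch(my_callback, None))  # -> None, no traceback or 500
print(dispatch(my_callback, 21))    # -> 42
```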
Complete traceback example:
```
Internal Server Error: /resources/app/buzzword/_dash-update-component
Traceback (most recent call last):
File "/home/danny/venv/py3.7/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/home/danny/venv/py3.7/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/danny/venv/py3.7/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/danny/venv/py3.7/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/home/danny/venv/py3.7/lib/python3.7/site-packages/django_plotly_dash/views.py", line 93, in update
resp = app.dispatch_with_args(request_body, arg_map)
File "/home/danny/venv/py3.7/lib/python3.7/site-packages/django_plotly_dash/dash_wrapper.py", line 562, in dispatch_with_args
res = self.callback_map[target_id]['callback'](*args, **argMap)
File "/home/danny/venv/py3.7/lib/python3.7/site-packages/dash/dash.py", line 1366, in add_context
raise exceptions.PreventUpdate
dash.exceptions.PreventUpdate
[03/Dec/2019 14:59:44] "POST /resources/app/buzzword/_dash-update-component HTTP/1.1" 500 136760
```
|
closed
|
2019-12-03T15:26:59Z
|
2019-12-04T16:15:01Z
|
https://github.com/GibbsConsulting/django-plotly-dash/issues/205
|
[] |
interrogator
| 3
|
christabor/flask_jsondash
|
flask
| 85
|
Clear api results field on modal popup
|
Just to prevent any confusion as to what is loaded there.
|
closed
|
2017-03-01T19:04:52Z
|
2017-03-02T20:28:18Z
|
https://github.com/christabor/flask_jsondash/issues/85
|
[
"enhancement"
] |
christabor
| 0
|
tensorlayer/TensorLayer
|
tensorflow
| 643
|
TensorBoard: "No scalar data was found" and "No graph definition files were found"
|
### Issue Description
I can't generate graph or scalar data in TensorBoard.
I have created code for a CNN, but I can't understand why it doesn't generate graph or scalar data.
The other error is:
InvalidArgumentError: You must feed a value for placeholder tensor 'y' with dtype float and shape [?,10]
[[Node: y = Placeholder[dtype=DT_FLOAT, shape=[?,10], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
When I take out the expression that creates the summary, the placeholder error disappears, but that way I cannot create any data for TensorBoard. I cannot find the error in the expression of y for the placeholder.
Can someone help me? Thanks in advance.
### Reproducible Code
```python
import numpy as np
import matplotlib.pyplot as plt
import h5py
import os
import tensorflow as tf
from scipy.io import loadmat
from skimage import color
from skimage import io
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
#Define o limite para quais mensagens serรฃo registradas.
tf.logging.set_verbosity(tf.logging.INFO)
#%matplotlib inline
#Definir o tamanho default da imagem
plt.rcParams['figure.figsize'] = (16.0, 4.0)
"""Criando o modelo CNN para o conjunto das imagens do SVHN"""
TENSORBOARD_SUMMARIES_DIR = '/ArqAna/svhn_classifier_logs'
"""Carregando os dados"""
#Abrindo o arquivo
anadate = h5py.File('SVHN_dados.h5', 'r')
#Carregando o conjunto de treinamento, teste e validaรงรฃo
X_treino = anadate['X_treino'][:]
y_treino = anadate['y_treino'][:]
X_teste = anadate['X_teste'][:]
y_teste = anadate['y_teste'][:]
X_val = anadate['X_val'][:]
y_val = anadate['y_val'][:]
#Fecha o arquivo
anadate.close()
print('Conjunto de treinamento', X_treino.shape, y_treino.shape)
print('Conjunto de validaรงรฃo', X_val.shape, y_val.shape)
print('Conjunto de teste', X_teste.shape, y_teste.shape)
def prepare_log_dir():
'''Limpa os arquivos de log e cria novos diretรณrios para colocar
o arquivo de log do tensorbard.'''
if tf.gfile.Exists(TENSORBOARD_SUMMARIES_DIR):
tf.gfile.DeleteRecursively(TENSORBOARD_SUMMARIES_DIR)
tf.gfile.MakeDirs(TENSORBOARD_SUMMARIES_DIR)
#Usando placeholder
comp = 32*32
x = tf.placeholder(tf.float32, shape = [None, 32, 32, 1], name='Input_Data')
y = tf.placeholder(tf.float32, shape = [None, 10], name='Input_Labels')
y_cls = tf.argmax(y, 1)
discard_rate = tf.placeholder(tf.float32, name='Discard_rate')
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
def cnn_model_fn(features):
"""Funรงรฃo modelo para CNN."""
# Camada de entrada
# SVHN imagens sรฃo 32x32 pixels e 1 canal de cor
input_layer = tf.reshape(features, [-1, 32, 32, 1])
# Camada Convolucional #1
# Utiliza 32 filtros extraindo regiรตes de 5x5 pixels com funรงรฃo de ativaรงรฃo ReLU
# Com preenchimento para conservar a width and height (evitar que a saรญda "encolha").
# Input Tensor Shape: [batch_size, 32, 32, 1]
# Output Tensor Shape: [batch_size, 32, 32, 32]
conv1 = tf.layers.conv2d(
inputs=input_layer,
filters=32,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
# Camada Pooling #1
# Primeira camada max pooling com um filtro 2x2 e um passo de 2
# Input Tensor Shape: [batch_size, 32, 32, 32]
# Output Tensor Shape: [batch_size, 14, 14, 32]
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
# Camada Convolucional #2
# Computa 64 features usando um filtro 5x5
# Com preenchimento para conservar a width and height (evitar que a saรญda "encolha").
# Input Tensor Shape: [batch_size, 14, 14, 32]
# Output Tensor Shape: [batch_size, 14, 14, 64]
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
# Camada Pooling #2
# Segunda camada max pooling com um filtro 2x2 e um passo de 2
# Input Tensor Shape: [batch_size, 14, 14, 64]
# Output Tensor Shape: [batch_size, 8, 8, 64]
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
# Flatten tensor em um batch de vetores
# Input Tensor Shape: [batch_size, 8, 8, 64]
# Output Tensor Shape: [batch_size, 8 * 8 * 64]
pool2_flat = tf.reshape(pool2, [-1, 8 * 8 * 64])
# Camada Dense
# Densely conectada camada com 1024 neuronios
# Input Tensor Shape: [batch_size, 8 * 8 * 64]
# Output Tensor Shape: [batch_size, 1024]
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
# Adicionar operaรงรฃo de dropout; 0.6 probabilidade que o elemento serรก mantido
dropout = tf.layers.dropout(
inputs=dense, rate=discard_rate)
# Camada Logits
# Input Tensor Shape: [batch_size, 1024]
# Output Tensor Shape: [batch_size, 10]
logits = tf.layers.dense(inputs=dropout, units=10)
return logits
max_epochs = 1
num_examples = X_treino.shape[0]
#Calculando a prediรงรฃo, otimizaรงรฃo e a acurรกcia
with tf.name_scope('Model_Prediction'):
prediction = cnn_model_fn(x)
tf.summary.scalar('Model_Prediction', prediction)
prediction_cls = tf.argmax(prediction, 1)

with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.losses.softmax_cross_entropy(
        onehot_labels=y, logits=prediction))
    tf.summary.scalar('loss', loss)

with tf.name_scope('Adam_optimizer'):
    optimizer = tf.train.AdamOptimizer().minimize(loss)

# Check whether the predicted class equals the true class of each image
correct_prediction = tf.equal(prediction_cls, y_cls)
# Cast the prediction result to float and compute the mean (accuracy)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

t_summary = tf.summary.merge_all()

# Open the TensorFlow session and save the graph
sess = tf.Session()
summary_writer = tf.summary.FileWriter('/ArqAna/summary', sess.graph)
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()

pasta = 'ArqAna/'
if not os.path.exists(pasta):
    os.makedirs(pasta)
diret = os.path.join(pasta, 'ana_svhn')

val_summary = sess.run(t_summary)
summary_writer.add_summary(val_summary)
# tf.train.SummaryWriter is deprecated; tf.summary.FileWriter is the current API
summary_writer = tf.summary.FileWriter('/ArqAna/svhn_classifier_logs', sess.graph)
# saver.restore(sess=session, save_path=diret)

# Initialization
# Number of examples in each batch used to update the weights
batch_size = 100
# Fraction of neurons dropped in training mode
discard_per = 0.7

# with tf.Session() as sess:
#     sess.run(tf.global_variables_initializer())

def get_batch(X, y, batch_size=100):
    for i in np.arange(0, y.shape[0], batch_size):
        end = min(X.shape[0], i + batch_size)
        yield (X[i:end], y[i:end])

# Run the training
treino_loss = []
valid_loss = []
for epoch in range(max_epochs):
    print('Training the network')
    epoch_loss = 0
    print()
    print('Epoch ', epoch + 1, ': \n')
    step = 0
    # Train over the batches of this epoch
    for (epoch_x, epoch_y) in get_batch(X_treino, y_treino, batch_size):
        _, treino_accu, c = sess.run(
            [optimizer, accuracy, loss],
            feed_dict={x: epoch_x, y: epoch_y, discard_rate: discard_per})
        treino_loss.append(c)
        if step % 40 == 0:
            print("Step:", step, "\n", "\nMini-Batch Loss : ", c)
            print('Mini-Batch Accuracy :', treino_accu * 100.0, '%')
            # Validate the predictions and the summaries
            accu = 0.0
            for (epoch_x, epoch_y) in get_batch(X_val, y_val, 100):
                correct, _c = sess.run(
                    [correct_prediction, loss],
                    feed_dict={x: epoch_x, y: epoch_y, discard_rate: 0.0})
                valid_loss.append(_c)
                accu += np.sum(correct[correct == True])
            print('Validation Accuracy :', accu * 100.0 / y_val.shape[0], '%')
            print()
        step = step + 1
    print('Epoch', epoch + 1, 'completed out of', max_epochs)

# Test the predictions and the summaries
accu = 0.0
for (epoch_x, epoch_y) in get_batch(X_teste, y_teste, 100):
    correct = sess.run([correct_prediction], feed_dict={x: epoch_x, y: epoch_y, discard_rate: 0.0})
    accu += np.sum(correct[correct == True])
print('Test Accuracy :', accu * 100.0 / y_teste.shape[0], '%')
print()

# Function to plot random images from an image set
def plot_images(images, nrows, ncols, cls_true, cls_pred=None):
    # Initialize the subplot grid
    fig, axes = plt.subplots(nrows, ncols)
    # Randomly select nrows * ncols images
    rs = np.random.choice(images.shape[0], nrows * ncols)
    # For each axis object in the grid
    for i, ax in zip(rs, axes.flat):
        # No predictions were passed
        if cls_pred is None:
            title = "True: {0}".format(np.argmax(cls_true[i]))
        # When predictions are passed, show labels + predictions
        else:
            title = "True: {0}, Pred: {1}".format(np.argmax(cls_true[i]), cls_pred[i])
        # Show the image
        ax.imshow(images[i, :, :, 0], cmap='binary')
        # Annotate the image
        ax.set_title(title)
        # Do not draw the tick grid
        ax.set_xticks([])
        ax.set_yticks([])

# Plot training images
plot_images(X_treino, 2, 4, y_treino);

# Evaluate the test data
teste_pred = []
for (epoch_x, epoch_y) in get_batch(X_teste, y_teste, 100):
    correct = sess.run([prediction_cls], feed_dict={x: epoch_x, y: epoch_y, discard_rate: 0.0})
    teste_pred.append((np.asarray(correct, dtype=int)).T)

# Convert the list of arrays into a single flat numpy array
def flatten(lists):
    results = []
    for numbers in lists:
        for x in numbers:
            results.append(x)
    return np.asarray(results)

flat_array = flatten(teste_pred)
flat_array = flat_array.T
flat_array = flat_array[0]
flat_array.shape

# Plot the misclassified results
incorrect = flat_array != np.argmax(y_teste, axis=1)
images = X_teste[incorrect]
cls_true = y_teste[incorrect]
cls_pred = flat_array[incorrect]
plot_images(images, 2, 6, cls_true, cls_pred);

# Plot the correctly classified results from a random sample of the test set
correct = np.invert(incorrect)
images = X_teste[correct]
cls_true = y_teste[correct]
cls_pred = flat_array[correct]
plot_images(images, 2, 6, cls_true, cls_pred);

# Plot the training and validation loss
import matplotlib.pyplot as plt
plt.plot(treino_loss, 'r')
plt.plot(valid_loss, 'g')

prepare_log_dir()

# Save the ArqAna checkpoint
saver.save(sess=sess, save_path=diret)
```
|
closed
|
2018-05-21T11:17:54Z
|
2018-05-21T20:39:38Z
|
https://github.com/tensorlayer/TensorLayer/issues/643
|
[] |
AnaValeriaS
| 1
|
modelscope/data-juicer
|
data-visualization
| 175
|
[MM] audio duration filter
|
1. Support audio reading better.
2. Add a Filter based on the duration of audios (a rough sketch of the idea is shown below).
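Not the actual Data-Juicer implementation — just a sketch of what a duration-based Filter could look like; the `process`-style interface, the `audios` field name, and the use of `soundfile` are assumptions here:
```python
import soundfile as sf


class AudioDurationFilter:
    """Keep a sample only if all of its audio files fall inside a duration range."""

    def __init__(self, min_duration: float = 0.0, max_duration: float = float("inf")):
        self.min_duration = min_duration
        self.max_duration = max_duration

    def process(self, sample: dict) -> bool:
        # 'audios' is assumed to be a list of audio file paths attached to the sample
        for path in sample.get("audios", []):
            duration = sf.info(path).duration  # duration in seconds
            if not (self.min_duration <= duration <= self.max_duration):
                return False
        return True
```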
|
closed
|
2024-01-12T07:09:42Z
|
2024-01-17T09:57:16Z
|
https://github.com/modelscope/data-juicer/issues/175
|
[
"enhancement",
"dj:multimodal",
"dj:op"
] |
HYLcool
| 0
|
LAION-AI/Open-Assistant
|
python
| 3,707
|
Bug chat
|
Hi, I cannot write a chat in Open Assistant. Is this normal?
|
closed
|
2023-10-04T19:20:18Z
|
2023-11-28T07:34:09Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3707
|
[] |
NeWZekh
| 3
|
aio-libs/aiomysql
|
sqlalchemy
| 92
|
How to get the log for sql
|
How can I get a log of the SQL statements that are executed?
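Not an official answer, just a sketch of how I would try it: enable Python logging and, assuming aiomysql exposes an `echo` flag the way aiopg does, ask it to echo executed statements (please verify the `echo` argument against your installed version):
```python
import asyncio
import logging

import aiomysql

logging.basicConfig(level=logging.DEBUG)  # let the aiomysql logger emit its messages


async def main():
    conn = await aiomysql.connect(host="127.0.0.1", user="root",
                                  password="secret", db="test",
                                  echo=True)  # assumed flag: echo executed SQL statements
    cur = await conn.cursor()
    await cur.execute("SELECT 42")
    print(await cur.fetchone())
    await cur.close()
    conn.close()


asyncio.run(main())
```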
|
closed
|
2016-08-03T14:57:52Z
|
2017-04-17T10:33:46Z
|
https://github.com/aio-libs/aiomysql/issues/92
|
[] |
631068264
| 5
|
pydata/xarray
|
pandas
| 9,539
|
Creating DataTree from DataArrays
|
### What is your issue?
Original issue can be found here: https://github.com/xarray-contrib/datatree/issues/320
## Creating DataTree from DataArrays
What are the best ways to create a DataTree from DataArrays?
I don't usually work with Dataset objects, but rather with DataArray objects. I want the option to re-use the (physics-motivated) arithmetic throughout my code, but manipulating multiple DataArrays at once. I initially thought about combining those DataArrays into a Dataset, but then learned that is not really a sufficient solution (more details [here](https://github.com/pydata/xarray/issues/8787)). DataTree seems like a better solution!
The only issue I am having right now is that DataTree seems to be adding more complexity than I need, because it is converting everything into Datasets, instead of allowing me to have a tree of DataArrays. This is not a deal-breaker for me, however it definitely increases the mental load of using the DataTree code. One thing that would help _significantly_ would be an easy way to create a DataTree from a list or iterable of arrays.
Here are my suggestions. Please let me know if there are already easy ways to do this which I just didn't find yet!
## 1 - Improve DataTree.from_dict
There is currently `DataTree.from_dict`, however this fails for unnamed DataArrays (when using `datatree.__version__=='0.0.14'`). For example:
```python
import numpy as np
import xarray as xr
import datatree as xrd
# generate some 3D data
nx, ny, nz = (32, 32, 32)
dx, dy, dz = (0.1, 0.1, 0.1)
coords=dict(x=np.arange(nx)*dx, y=np.arange(ny)*dy, z=np.arange(nz)*dz)
array = xr.DataArray(np.random.random((nx,ny,nz)), coords=coords)
# take some 2D slices of the 3D data
arrx = array.isel(x=0)
arry = array.isel(y=0)
arrz = array.isel(z=0)
# combine those slices into a single object so we can apply operations to all arrays at once,
# using the same interface & code for applying operations to a single DataArray!
tree = xrd.DataTree.from_dict(dict(x0=arrx, y0=arry, z0=arrz))
```
This fails, with: `ValueError: unable to convert unnamed DataArray to a Dataset without providing an explicit name`
However, it will instead succeed if I give names to the arrays:
```python
arrx.name = 'x0'
arry.name = 'y0'
arrz.name = 'z0'
tree = xrd.DataTree.from_dict(dict(x0=arrx, y0=arry, z0=arrz))
```
I suggest improving the `DataTree.from_dict` method so that it instead succeeds when given unnamed DataArrays. The behavior in this case should be to construct the Dataset from each DataArray, using the key provided in `from_dict` as the `DataArray.name` for any DataArray without a name.
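As a stop-gap, this can already be approximated on the user side with a small wrapper (just a sketch of the idea, not the proposed library change), reusing `arrx`/`arry`/`arrz` from the example above:
```python
import xarray as xr
import datatree as xrd


def from_dict_allow_unnamed(d):
    """Name any unnamed DataArray after its dict key before building the tree."""
    named = {
        key: value.rename(key)
        if isinstance(value, xr.DataArray) and value.name is None
        else value
        for key, value in d.items()
    }
    return xrd.DataTree.from_dict(named)


tree = from_dict_allow_unnamed(dict(x0=arrx, y0=arry, z0=arrz))
```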
## 2 - Add a DataTree.from_arrays method
Ideally, I would like to be able to do something like (using arrays from the example above):
```python
tree = xrd.DataTree.from_arrays([arrx, arry, arrz])
```
This should work for any list of named arrays, and can use the array names to infer keys for the resulting DataTree. It will probably be easy to implement, I am thinking something like this:
```python
@classmethod
def from_arrays(cls, arrays):
    if any(getattr(arr, "name", None) is None for arr in arrays):
        raise Exception('from_arrays requires all arrays to have names!')
    d = {arr.name: arr for arr in arrays}
    return cls.from_dict(d)
```
## 3 - Allow DataTree data to be DataArrays instead of requiring that they are Datasets
This would be excellent for me as an end-user, however I am guessing it would be really difficult to implement. I would understand if this is not feasible!
|
open
|
2024-09-23T19:00:26Z
|
2024-09-23T19:28:14Z
|
https://github.com/pydata/xarray/issues/9539
|
[
"design question",
"topic-DataTree"
] |
eni-awowale
| 1
|
microsoft/JARVIS
|
deep-learning
| 48
|
Error when start web server.
|
```
(jarvis) m200@DESKTOP-K0LFJU8:~/workdir/JARVIS/web$ npm run dev
> vue3-ts-vite-router-tailwindcss@0.0.0 dev
> vite
file:///home/m200/workdir/JARVIS/web/node_modules/vite/bin/vite.js:7
await import('source-map-support').then((r) => r.default.install())
^^^^^
SyntaxError: Unexpected reserved word
at Loader.moduleStrategy (internal/modules/esm/translators.js:133:18)
at async link (internal/modules/esm/module_job.js:42:21)
```
Any idea?
|
closed
|
2023-04-05T16:56:29Z
|
2023-04-07T06:18:28Z
|
https://github.com/microsoft/JARVIS/issues/48
|
[] |
lychees
| 2
|
graphql-python/graphene-django
|
graphql
| 943
|
Date type not accepting ISO DateTime string.
|
This causes an error:
*(screenshot of the failing query not preserved)*
But this works fine:
*(screenshot of the working query not preserved)*
Is it a bug or a feature?
|
closed
|
2020-04-21T05:46:15Z
|
2020-04-25T13:13:35Z
|
https://github.com/graphql-python/graphene-django/issues/943
|
[] |
a-c-sreedhar-reddy
| 1
|
redis/redis-om-python
|
pydantic
| 656
|
Unable to declare a Vector-Field for a JsonModel
|
## The Bug
I'm trying to declare a Field as a Vector-Field by setting the `vector_options` on the `Field`.
Pydantic is then forcing me to annotate the field with a proper type.
But with any possible type for a vector, I always get errors.
The one that I think should definitely work is `list[float]`, but this results in
`AttributeError: type object 'float' has no attribute '__origin__'`.
### Example
```python
class Group(JsonModel):
    articles: List[Article]
    tender_text: str = Field(index=False)
    tender_embedding: list[float] = Field(
        index=True,
        vector_options=VectorFieldOptions(
            algorithm=VectorFieldOptions.ALGORITHM.FLAT,
            type=VectorFieldOptions.TYPE.FLOAT32,
            dimension=3,
            distance_metric=VectorFieldOptions.DISTANCE_METRIC.COSINE,
        ),
    )
```
## No Documentation for Vector-Fields?
There seems to be no documentation about this feature.
I just found a merge-request which describes some features a bit, but there is no example, test or any documentation about this.
|
open
|
2024-09-17T09:58:41Z
|
2024-10-05T14:59:26Z
|
https://github.com/redis/redis-om-python/issues/656
|
[] |
MarkusPorti
| 2
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 10,888
|
OrderingList typing incorrect / incomplete and implementation doesn't accept non-int indices
|
### Describe the bug
There are a number of issues with the sqlalchemy.ext.orderinglist:OrderingList type hints and implementation:
1. The ordering function is documented as being allowed to return any type of object, and the tests demonstrate this by returning strings. The type annotation requires it to return integers, only, however.
2. The `OrderingList` signature marks the `ordering_attr` as optional, but accepting `None` here would lead to a type error when `getattr(entity, self.ordering_attr)` is called.
3. List objects accept any `SupportsIndex` type object, one that implements the [`.__index__()` method](https://docs.python.org/3/reference/datamodel.html#object.__index__). The codebase assumes it is always an integer (`count_from_1()` and `count_from_n_factory()` use addition with the value, which `SupportsIndex` objects don't need to support). It's probably best to use `operator.index()` on the value before passing it to the ordering function.
4. The `ordering_list()` ignores the `_T` type variable in the `OrderingFunc` and `OrderingList` signatures
5. The `OrderingFunc.ordering_func` annotation ignores the `_T` type variable in the `OrderingFunc` signature
6. List objects accept setting values via a slice and any iterable, e.g. `listobj[start:stop] = values_generator()`, but `OrderingList.__setitem__` with a slice object assumes the value is a sequence (supporting indexing), where a list accepts any iterable, plus it doesn't adjust the list length when the slice is being assigned fewer or more elements or handle slices with stride other than 1. This issue is mostly mitigated by the SQLAlchemy collections instrumentation (which wraps `OrderingList.__setitem__` with an implementation that handles slices *almost* correctly, only assuming a sequence when slices with a stride are used). But, there may be situations where `OrderingList` is used without this instrumentation. The internal implementation of `list.__setitem__()` converts the passed-in object to a list first, consuming the iterable; `OrderingList.__setitem__()` should do the same, plus it should handle slice strides.
7. There are no type annotations for the `count_from_*` utility functions or the overridden list methods.
### SQLAlchemy Version in Use
2.0.25
### Python Version
3.11
### Operating system
Linux
### To Reproduce
```python
# point 2, ordering_attr is not optional
from sqlalchemy.ext.orderinglist import OrderingList
class Item:
    position = None
ol = OrderingList()
ol.append(Item())
"""
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../sqlalchemy/lib/sqlalchemy/ext/orderinglist.py", line 339, in append
self._order_entity(len(self) - 1, entity, self.reorder_on_append)
File "...//sqlalchemy/lib/sqlalchemy/ext/orderinglist.py", line 327, in _order_entity
have = self._get_order_value(entity)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...//sqlalchemy/lib/sqlalchemy/ext/orderinglist.py", line 308, in _get_order_value
return getattr(entity, self.ordering_attr)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: attribute name must be string, not 'NoneType'
"""
# point 6, slice assigment of iterable items
ol = OrderingList('position')
ol[:] = (Item() for _ in range(3))
"""
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "...//sqlalchemy/lib/sqlalchemy/ext/orderinglist.py", line 375, in __setitem__
self.__setitem__(i, entity[i])
~~~~~~^^^
TypeError: 'generator' object is not subscriptable
"""
# point 6, slice assigment length not matching item length
ol = OrderingList('position')
ol.append(Item())
# assign 1 item to a slice of length 0
ol[:0] = [Item()]
assert len(ol) == 2 # assertion error, length is 1
# assign 0 items to a slice of length 1
ol[:] = []
"""
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../sqlalchemy/lib/sqlalchemy/ext/orderinglist.py", line 375, in __setitem__
self.__setitem__(i, entity[i])
~~~~~~^^^
IndexError: list index out of range
"""
```
### Additional context
I'll be providing a PR that addresses all these issues.
The slice issues are handled by just calling `super().__setitem__(slice, values)` and then `self._reorder()`. It's a cop-out but one that only applies to cases where `OrderedList` is not used as instrumented attribute, which is very much not the common case.
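For illustration, a minimal sketch of that override (not the exact PR code; it uses the public `reorder()` rather than the internal `_reorder()` mentioned above):
```python
from sqlalchemy.ext.orderinglist import OrderingList


class SliceSafeOrderingList(OrderingList):
    def __setitem__(self, index, entity):
        if isinstance(index, slice):
            # Let the built-in list handle iterables, strides and length changes,
            # then renumber the whole collection.
            super(OrderingList, self).__setitem__(index, entity)
            self.reorder()
        else:
            super().__setitem__(index, entity)
```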
|
open
|
2024-01-15T18:08:09Z
|
2024-01-23T19:35:24Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/10888
|
[
"sqlalchemy.ext",
"PRs (with tests!) welcome",
"typing"
] |
mjpieters
| 3
|
keras-team/autokeras
|
tensorflow
| 1,746
|
Bug: predict doesn't work when final_fit is not performed
|
### Bug Description
When final_fit is not performed, there are downstream issues with predict.
The issue is in `autokeras/engine/tuner.py` in `AutoTuner.search` when the `else` branch is evaluated in line 220.
```
model = self.get_best_models()[0]
history = None
pipeline = pipeline_module.load_pipeline(
    self._pipeline_path(self.oracle.get_best_trials(1)[0].trial_id)
)
```
Then, when predict is subsequently performed, there are first a bunch of warnings:
```
INFO:tensorflow:Oracle triggered exit
INFO:tensorflow:Assets written to: ./structured_data_classifier/best_model/assets
WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restorefor details about the status object returned by the restore function.
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.iter
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.beta_1
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.beta_2
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.decay
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.learning_rate
2022-06-29 08:05:16.340694: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2022-06-29 08:05:16.340722: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2022-06-29 08:05:16.340759: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2022-06-29 08:05:16.340777: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2022-06-29 08:05:16.340797: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2022-06-29 08:05:16.340811: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2022-06-29 08:05:16.340824: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
```
and then the following error:
```
Traceback (most recent call last):
File ~/Programming/temp/autokeras_structured.py:44 in <module>
predicted_y = clf.predict(x_test)
File ~/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/autokeras/tasks/structured_data.py:165 in predict
return super().predict(x=x, **kwargs)
File ~/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/autokeras/auto_model.py:455 in predict
y = model.predict(dataset, **kwargs)
File ~/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/utils/traceback_utils.py:67 in error_handler
raise e.with_traceback(filtered_tb) from None
File ~/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/tensorflow/python/eager/execute.py:54 in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
FailedPreconditionError: Graph execution error:
Detected at node 'model/multi_category_encoding/string_lookup_10/None_Lookup/LookupTableFindV2' defined at (most recent call last):
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/spyder_kernels/console/__main__.py", line 23, in <module>
start.main()
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/spyder_kernels/console/start.py", line 328, in main
kernel.start()
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/ipykernel/kernelapp.py", line 677, in start
self.io_loop.start()
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/tornado/platform/asyncio.py", line 199, in start
self.asyncio_loop.run_forever()
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/asyncio/base_events.py", line 601, in run_forever
self._run_once()
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once
handle._run()
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/ipykernel/kernelbase.py", line 471, in dispatch_queue
await self.process_one()
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/ipykernel/kernelbase.py", line 460, in process_one
await dispatch(*args)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/ipykernel/kernelbase.py", line 367, in dispatch_shell
await result
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/ipykernel/kernelbase.py", line 662, in execute_request
reply_content = await reply_content
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/ipykernel/ipkernel.py", line 360, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/ipykernel/zmqshell.py", line 532, in run_cell
return super().run_cell(*args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 2881, in run_cell
result = self._run_cell(
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 2936, in _run_cell
return runner(coro)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner
coro.send(None)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3135, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3338, in run_ast_nodes
if await self.run_code(code, result, async_=asy):
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3398, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/var/folders/_h/8xl2b_ss5tq_0k8c51zx4vb00000gn/T/ipykernel_61037/3742600826.py", line 1, in <cell line: 1>
runfile('/Users/PS/Programming/temp/autokeras_structured.py', wdir='/Users/PS/Programming/temp')
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/spyder_kernels/customize/spydercustomize.py", line 577, in runfile
exec_code(file_code, filename, ns_globals, ns_locals,
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/spyder_kernels/customize/spydercustomize.py", line 465, in exec_code
exec(compiled, ns_globals, ns_locals)
File "/Users/PS/Programming/temp/autokeras_structured.py", line 44, in <module>
predicted_y = clf.predict(x_test)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/autokeras/tasks/structured_data.py", line 165, in predict
return super().predict(x=x, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/autokeras/auto_model.py", line 455, in predict
y = model.predict(dataset, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/engine/training.py", line 2033, in predict
tmp_batch_outputs = self.predict_function(iterator)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/engine/training.py", line 1845, in predict_function
return step_function(self, iterator)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/engine/training.py", line 1834, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/engine/training.py", line 1823, in run_step
outputs = model.predict_step(data)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/engine/training.py", line 1791, in predict_step
return self(x, training=False)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/engine/training.py", line 490, in __call__
return super().__call__(*args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/engine/base_layer.py", line 1014, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler
return fn(*args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/engine/functional.py", line 458, in call
return self._run_internal_graph(
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/engine/functional.py", line 596, in _run_internal_graph
outputs = node.layer(*args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/engine/base_layer.py", line 1014, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler
return fn(*args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/autokeras/keras_layers.py", line 99, in call
for input_node, encoding_layer in zip(split_inputs, self.encoding_layers):
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/autokeras/keras_layers.py", line 100, in call
if encoding_layer is None:
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/autokeras/keras_layers.py", line 108, in call
output_nodes.append(
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/engine/base_layer.py", line 1014, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler
return fn(*args, **kwargs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/layers/preprocessing/index_lookup.py", line 628, in call
lookups = self._lookup_dense(inputs)
File "/Users/PS/opt/miniconda3/envs/envspyder/lib/python3.9/site-packages/keras/layers/preprocessing/index_lookup.py", line 657, in _lookup_dense
lookups = self.lookup_table.lookup(inputs)
Node: 'model/multi_category_encoding/string_lookup_10/None_Lookup/LookupTableFindV2'
Table not initialized.
[[{{node model/multi_category_encoding/string_lookup_10/None_Lookup/LookupTableFindV2}}]] [Op:__inference_predict_function_10353]
```
When final_fit is performed there are no issues, so my guess is that there is a problem with retrieving the model or pipeline, but unfortunately I can't seem to narrow it down further. Any ideas are greatly appreciated.
### Bug Reproduction
Code for reproducing the bug:
```
import numpy as np
import pandas as pd
import tensorflow as tf
import autokeras as ak
from tensorflow.keras import callbacks as tf_callbacks
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
train_data = pd.read_csv(train_file_path)
test_data = pd.read_csv(test_file_path)
y_train = train_data.survived
y_test = test_data.survived
x_train = train_data.drop('survived', axis=1)
x_test = test_data.drop('survived', axis=1)
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
overwrite=True, max_trials=2
) # It tries 3 different models.
# Feed the structured data classifier with training data.
callbacks = [tf_callbacks.EarlyStopping(patience=10, min_delta=1e-4)]
clf.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test), callbacks=callbacks)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
```
This problem affects all classifiers and regressors.
### Expected Behavior
That predict works without errors or warnings.
### Setup Details
Include the details about the versions of:
- OS type and version: macOS Big Sur: version 11.6.7
- Python: Python 3.9.12
- autokeras: 1.0.19
- keras-tuner: 1.1.2
- scikit-learn: 1.1.1
- numpy: 1.23.0
- pandas: 1.4.2
- tensorflow: 2.9.1
|
open
|
2022-06-29T06:33:38Z
|
2024-03-26T07:18:28Z
|
https://github.com/keras-team/autokeras/issues/1746
|
[
"bug report"
] |
psaks
| 8
|
yvann-ba/Robby-chatbot
|
streamlit
| 67
|
Can we connect other SQL and NoSQL databases?
|
Thanks @yvann-hub for the great project.
I would like to know whether we can connect other SQL and NoSQL databases, such as MongoDB, to run natural language queries on their data. Is that possible?
|
open
|
2023-10-20T06:05:00Z
|
2023-10-20T06:05:39Z
|
https://github.com/yvann-ba/Robby-chatbot/issues/67
|
[] |
sainisanjay
| 0
|
PedroBern/django-graphql-auth
|
graphql
| 142
|
Nested Form field?
|
# Prerequisites
* [ ] Is it a bug?
* [*] Is it a new feature?
* [*] Is it a a question?
* [ ] Can you reproduce the problem?
* [ ] Are you running the latest version?
* [ ] Did you check for similar issues?
* [ ] Did you perform a cursory search?
For more information, see the [CONTRIBUTING](https://github.com/PedroBern/django-graphql-auth/blob/master/CONTRIBUTING.md) guide.
# Description
Thank you for this amazing package. I'm following the same approach to perform other CRUD operations, and it is working fine. But I have one question: how can I implement a nested field mutation, and how can I upload images? In other words, does a simple model form work or not?
For example, I have a model called Media, and here is the mutation for that model.
```
class CreateMedia(
        RelayMutationMixin,
        DynamicInputMixin,
        Output,
        graphene.ClientIDMutation):
    form = MediaForm
    _inputs = list(form.Meta.fields)
    media = graphene.Field(MediaNode)

    @classmethod
    @login_required
    def resolve_mutation(cls, root, info, **kwargs):
        f = cls.form(kwargs)
        if f.is_valid():
            try:
                media = f.save()
                return cls(success=True, media=media)
            except Exception as e:
                return cls(success=False, errors="Some things went wrongs")
        else:
            return cls(success=False, errors=f.errors.get_json_data())
```
and here is the model form:
```
class MediaForm(forms.ModelForm):
    class Meta:
        model = Media
        fields = ("media", "content_type", "object_id")
```
But it is not working
|
closed
|
2022-01-29T06:26:28Z
|
2022-01-30T16:25:16Z
|
https://github.com/PedroBern/django-graphql-auth/issues/142
|
[] |
saileshkush95
| 0
|
autogluon/autogluon
|
data-science
| 3,980
|
[Tabular] Improve feature_metadata verification
|
When a user specifies a custom `feature_metadata` object during fit, add additional guardrails to verify the compatibility between this feature metadata and the user provided training data, otherwise cryptic errors can occur downstream. There might also be a bug with NaN fills in this scenario.
AutoGluon Version: 1.x
Example with custom `feature_metadata`:
```
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
WARNING: Actual dtype differs from dtype in FeatureMetadata for feature "loan_id". Actual dtype: int | Expected dtype: float
```
At test time when NaN is present:
```
---------------------------------------------------------------------------
IntCastingNaNError Traceback (most recent call last)
Cell In[7], line 1
----> 1 predictor.predict(test_data)
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/autogluon/tabular/predictor/predictor.py:1931, in TabularPredictor.predict(self, data, model, as_pandas, transform_features, decision_threshold)
1929 if decision_threshold is None:
1930 decision_threshold = self.decision_threshold
-> 1931 return self._learner.predict(X=data, model=model, as_pandas=as_pandas, transform_features=transform_features, decision_threshold=decision_threshold)
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/autogluon/tabular/learner/abstract_learner.py:208, in AbstractTabularLearner.predict(self, X, model, as_pandas, inverse_transform, transform_features, decision_threshold)
206 decision_threshold = 0.5
207 X_index = copy.deepcopy(X.index) if as_pandas else None
--> 208 y_pred_proba = self.predict_proba(
209 X=X, model=model, as_pandas=False, as_multiclass=False, inverse_transform=False, transform_features=transform_features
210 )
211 problem_type = self.label_cleaner.problem_type_transform or self.problem_type
212 y_pred = get_pred_from_proba(y_pred_proba=y_pred_proba, problem_type=problem_type, decision_threshold=decision_threshold)
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/autogluon/tabular/learner/abstract_learner.py:188, in AbstractTabularLearner.predict_proba(self, X, model, as_pandas, as_multiclass, inverse_transform, transform_features)
186 else:
187 if transform_features:
--> 188 X = self.transform_features(X)
189 y_pred_proba = self.load_trainer().predict_proba(X, model=model)
190 y_pred_proba = self._post_process_predict_proba(
191 y_pred_proba=y_pred_proba, as_pandas=as_pandas, index=X_index, as_multiclass=as_multiclass, inverse_transform=inverse_transform
192 )
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/autogluon/tabular/learner/abstract_learner.py:464, in AbstractTabularLearner.transform_features(self, X)
462 def transform_features(self, X):
463 for feature_generator in self.feature_generators:
--> 464 X = feature_generator.transform(X)
465 return X
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/autogluon/features/generators/abstract.py:351, in AbstractFeatureGenerator.transform(self, X)
349 if self._pre_astype_generator:
350 X = self._pre_astype_generator.transform(X)
--> 351 X_out = self._transform(X)
352 if self._post_generators:
353 X_out = self._transform_generators(X=X_out, generators=self._post_generators)
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/autogluon/features/generators/bulk.py:175, in BulkFeatureGenerator._transform(self, X)
173 feature_df_list = []
174 for generator in generator_group:
--> 175 feature_df_list.append(generator.transform(X))
177 if not feature_df_list:
178 X = DataFrame(index=X.index)
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/autogluon/features/generators/abstract.py:351, in AbstractFeatureGenerator.transform(self, X)
349 if self._pre_astype_generator:
350 X = self._pre_astype_generator.transform(X)
--> 351 X_out = self._transform(X)
352 if self._post_generators:
353 X_out = self._transform_generators(X=X_out, generators=self._post_generators)
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/autogluon/features/generators/astype.py:157, in AsTypeFeatureGenerator._transform(self, X)
151 X[with_null_features] = X[with_null_features].fillna(0)
153 if self._type_map_real_opt:
154 # TODO: Confirm this works with sparse and other feature types!
155 # FIXME: Address situation where test-time invalid type values cause crash:
156 # https://stackoverflow.com/questions/49256211/how-to-set-unexpected-data-type-to-na?noredirect=1&lq=1
--> 157 X = X.astype(self._type_map_real_opt)
158 return X
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/pandas/core/generic.py:6513, in NDFrame.astype(self, dtype, copy, errors)
6511 else:
6512 try:
-> 6513 res_col = col.astype(dtype=cdt, copy=copy, errors=errors)
6514 except ValueError as ex:
6515 ex.args = (
6516 f"{ex}: Error while type casting for column '{col_name}'",
6517 )
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/pandas/core/generic.py:6534, in NDFrame.astype(self, dtype, copy, errors)
6530 results = [ser.astype(dtype, copy=copy) for _, ser in self.items()]
6532 else:
6533 # else, only a single dtype is given
-> 6534 new_data = self._mgr.astype(dtype=dtype, copy=copy, errors=errors)
6535 res = self._constructor_from_mgr(new_data, axes=new_data.axes)
6536 return res.__finalize__(self, method="astype")
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/pandas/core/internals/managers.py:414, in BaseBlockManager.astype(self, dtype, copy, errors)
411 elif using_copy_on_write():
412 copy = False
--> 414 return self.apply(
415 "astype",
416 dtype=dtype,
417 copy=copy,
418 errors=errors,
419 using_cow=using_copy_on_write(),
420 )
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/pandas/core/internals/managers.py:354, in BaseBlockManager.apply(self, f, align_keys, **kwargs)
352 applied = b.apply(f, **kwargs)
353 else:
--> 354 applied = getattr(b, f)(**kwargs)
355 result_blocks = extend_blocks(applied, result_blocks)
357 out = type(self).from_blocks(result_blocks, self.axes)
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/pandas/core/internals/blocks.py:616, in Block.astype(self, dtype, copy, errors, using_cow)
596 """
597 Coerce to the new dtype.
598
(...)
612 Block
613 """
614 values = self.values
--> 616 new_values = astype_array_safe(values, dtype, copy=copy, errors=errors)
618 new_values = maybe_coerce_values(new_values)
620 refs = None
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/pandas/core/dtypes/astype.py:238, in astype_array_safe(values, dtype, copy, errors)
235 dtype = dtype.numpy_dtype
237 try:
--> 238 new_values = astype_array(values, dtype, copy=copy)
239 except (ValueError, TypeError):
240 # e.g. _astype_nansafe can fail on object-dtype of strings
241 # trying to convert to float
242 if errors == "ignore":
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/pandas/core/dtypes/astype.py:183, in astype_array(values, dtype, copy)
180 values = values.astype(dtype, copy=copy)
182 else:
--> 183 values = _astype_nansafe(values, dtype, copy=copy)
185 # in pandas we don't store numpy str dtypes, so convert to object
186 if isinstance(dtype, np.dtype) and issubclass(values.dtype.type, str):
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/pandas/core/dtypes/astype.py:101, in _astype_nansafe(arr, dtype, copy, skipna)
96 return lib.ensure_string_array(
97 arr, skipna=skipna, convert_na_value=False
98 ).reshape(shape)
100 elif np.issubdtype(arr.dtype, np.floating) and dtype.kind in "iu":
--> 101 return _astype_float_to_int_nansafe(arr, dtype, copy)
103 elif arr.dtype == object:
104 # if we have a datetime/timedelta array of objects
105 # then coerce to datetime64[ns] and use DatetimeArray.astype
107 if lib.is_np_dtype(dtype, "M"):
File ~/SageMaker/autogluon_1_0_0_python_39/lib/python3.9/site-packages/pandas/core/dtypes/astype.py:146, in _astype_float_to_int_nansafe(values, dtype, copy)
142 """
143 astype with a check preventing converting NaN to an meaningless integer value.
144 """
145 if not np.isfinite(values).all():
--> 146 raise IntCastingNaNError(
147 "Cannot convert non-finite values (NA or inf) to integer"
148 )
149 if dtype.kind == "u":
150 # GH#45151
151 if not (values >= 0).all():
IntCastingNaNError: Cannot convert non-finite values (NA or inf) to integer: Error while type casting for column 'loan_id'
```
|
open
|
2024-03-14T18:58:00Z
|
2024-11-02T02:13:19Z
|
https://github.com/autogluon/autogluon/issues/3980
|
[
"bug",
"enhancement",
"module: tabular",
"priority: 2"
] |
Innixma
| 0
|
rasbt/watermark
|
jupyter
| 9
|
Why not use an ISO-8601 date format?
|
Hi there. Out of curiosity - how come you're not using an ISO-8601 date format (e.g. 2015-01-02 where this is unambiguously the 2nd of Jan 2015 and cannot be the 1st of Feb)? Would you accept a PR to switch to ISO-8601?
|
closed
|
2016-01-29T15:32:08Z
|
2016-01-31T18:10:13Z
|
https://github.com/rasbt/watermark/issues/9
|
[] |
ianozsvald
| 9
|
christabor/flask_jsondash
|
plotly
| 116
|
Switch chart types should ensure old el is removed.
|
This might not be an issue in all cases, but when converting from Image to Iframe (or vice versa), there is old content pushed down below when the new wrapper is appended. It should delete the old content first.
Once saved and after refresh, this problem doesn't exist, but it's a minor UI annoyance.
|
closed
|
2017-05-17T22:01:20Z
|
2017-05-20T00:52:55Z
|
https://github.com/christabor/flask_jsondash/issues/116
|
[
"bug"
] |
christabor
| 0
|
microsoft/nni
|
tensorflow
| 5,758
|
USER_CANCELED
|

When I submit the code to run on the server, without performing any operations, the status changes to "USER_CANCELED". Even the NNI code that used to run successfully before is now encountering this issue when I try to run it. Could anyone please advise on how to solve this problem?
|
closed
|
2024-03-19T11:25:33Z
|
2024-03-28T02:32:51Z
|
https://github.com/microsoft/nni/issues/5758
|
[] |
fantasy0905
| 0
|
albumentations-team/albumentations
|
machine-learning
| 1,717
|
[Feature Request] Return parameters from applied pipeline in Compose
|
It would be a great feature for debugging and for self-supervised learning to have the functionality existing in `ReplayCompose` available in `Compose` (example below):
- [x] Return what transforms with what parameters were applied.
- [x] Reapply Compose based on a given set of transform parameters
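For reference, a minimal example of how `ReplayCompose` exposes this today (the request is to have the same behaviour available in plain `Compose`):
```python
import numpy as np
import albumentations as A

transform = A.ReplayCompose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.5),
])

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
first = transform(image=image)
print(first["replay"])  # which transforms were applied, with their parameters

# Reapply exactly the same parameters to another image
second = A.ReplayCompose.replay(first["replay"], image=image)
```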
|
closed
|
2024-05-10T18:38:04Z
|
2024-05-30T18:56:00Z
|
https://github.com/albumentations-team/albumentations/issues/1717
|
[
"enhancement"
] |
ternaus
| 1
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 10,288
|
Make `Mapped` covariant
|
### Describe the use case
`Mapped` is currently invariant. It would make sense to have it covariant. For instance, it would help better support describing SQLAlchemy tables with protocols.
(Already discussed in #10287 - this is mainly a copy-paste of it)
### Databases / Backends / Drivers targeted
All
### Example Use
Currently, the following code does not type-check. Its aim is to be able to write a general helper function `get_parent_name` that operates on table protocols instead of actual tables.
```python
from typing import Protocol

from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship


# Protocols
class ParentProtocol(Protocol):
    name: Mapped[str]


class ChildProtocol(Protocol):
    # Read-only for simplicity, mutable protocol members are complicated,
    # see https://mypy.readthedocs.io/en/latest/common_issues.html#covariant-subtyping-of-mutable-protocol-members-is-rejected
    @property
    def parent(self) -> Mapped[ParentProtocol]:
        ...


def get_parent_name(child: ChildProtocol) -> str:
    return child.parent.name


# Implementations
class Base(DeclarativeBase):
    pass


class Parent(Base):
    __tablename__ = "parent"

    name: Mapped[str] = mapped_column(primary_key=True)


class Child(Base):
    __tablename__ = "child"

    name: Mapped[str] = mapped_column(primary_key=True)
    parent_name: Mapped[str] = mapped_column(ForeignKey(Parent.name))
    parent: Mapped[Parent] = relationship()


assert get_parent_name(Child(parent=Parent(name="foo"))) == "foo"
```
Current mypy output:
```
test.py:48: error: Argument 1 to "get_parent_name" has incompatible type "Child"; expected "ChildProtocol" [arg-type]
test.py:48: note: Following member(s) of "Child" have conflicts:
test.py:48: note: parent: expected "Mapped[ParentProtocol]", got "Mapped[Parent]"
Found 1 error in 1 file (checked 1 source file)
```
### Additional context
_No response_
|
closed
|
2023-08-28T15:13:55Z
|
2023-12-27T22:12:35Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/10288
|
[
"orm",
"use case",
"typing"
] |
RomeoDespres
| 8
|
tensorly/tensorly
|
numpy
| 412
|
AttributeError: module 'tensorly' has no attribute 'SVD_FUNS'
|
When trying to access `partial_tucker()`, I get the AttributeError below.
`partial_tucker(layer.weight.data, modes=[0, 1], rank=ranks, init='svd')`
**Error:**
AttributeError: module 'tensorly' has no attribute 'SVD_FUNS'
|
closed
|
2022-06-20T17:13:43Z
|
2022-06-20T17:33:58Z
|
https://github.com/tensorly/tensorly/issues/412
|
[] |
vemusharan
| 1
|
piskvorky/gensim
|
nlp
| 3,054
|
Support for supervised models in FastText
|
#### Problem description
This is a feature request for the supervised part of the FastText model
#### Steps/code/corpus to reproduce
No code. I just wonder why the supervised part of FastText is not available.
#### Versions
```zsh
>>> import platform; print(platform.platform())
Linux-5.4.0-47-generic-x86_64-with-glibc2.29
>>> import sys; print("Python", sys.version)
Python 3.8.5 (default, Jul 28 2020, 12:59:40)
[GCC 9.3.0]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.19.2
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.5.2
>>> import gensim; print("gensim", gensim.__version__)
gensim 3.8.3
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 1
```
|
closed
|
2021-02-27T14:01:19Z
|
2021-02-27T15:55:52Z
|
https://github.com/piskvorky/gensim/issues/3054
|
[
"feature"
] |
VolodyaCO
| 1
|
thtrieu/darkflow
|
tensorflow
| 676
|
How to test the network?
|
I want to test the network with the trained model. How can I test the model with the Pascal VOC 2007 test set?
Additionally, how many times should I train the network? The number of epochs is set to 2000, but I think that is too many (compared to Faster R-CNN). Should I use the pre-defined parameters?
|
open
|
2018-03-28T10:50:47Z
|
2021-10-15T02:45:47Z
|
https://github.com/thtrieu/darkflow/issues/676
|
[] |
bareblackfoot
| 6
|
noirbizarre/flask-restplus
|
api
| 605
|
Integrating database (mysql) with Flask-restplus
|
I am using Flask and flask-restplus to build a REST API in Python. The directory structure I followed is the same as in the link below, under the heading "Multiple APIs with reusable namespaces":
https://flask-restplus.readthedocs.io/en/stable/scaling.html
Now I am trying to add a MySQL database connection, and for that I used the following code in app.py:
from flaskext.mysql import MySQL
mysql = MySQL()
app.config['MYSQL_DATABASE_USER'] = 'name'
app.config['MYSQL_DATABASE_PASSWORD'] = 'test'
app.config['MYSQL_DATABASE_DB'] = 'textclassifier'
app.config['MYSQL_DATABASE_HOST'] = 'localhost'
mysql.init_app(app)
I tried using the mysql object through
from app import mysql
which is wrong. I read the whole documentation, but I could not find a way to add the MySQL connection without changing the structure.
Does anyone have an idea how I can solve this?
I posted the same question on Stack Overflow, but I have not received any answer there so far, nor on chat.
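Not a flask-restplus-specific answer, but a common Flask pattern that avoids the circular import is to keep the extension instance in its own module and bind it to the app later; `extensions.py` and `get_rows` below are hypothetical names used only for illustration:
```python
# extensions.py
from flaskext.mysql import MySQL

mysql = MySQL()

# app.py
from flask import Flask
from extensions import mysql

app = Flask(__name__)
app.config['MYSQL_DATABASE_USER'] = 'name'
app.config['MYSQL_DATABASE_PASSWORD'] = 'test'
app.config['MYSQL_DATABASE_DB'] = 'textclassifier'
app.config['MYSQL_DATABASE_HOST'] = 'localhost'
mysql.init_app(app)

# any namespace module
from extensions import mysql  # no import of app.py, so no circular import

def get_rows():
    conn = mysql.connect()
    cursor = conn.cursor()
    cursor.execute("SELECT 1")
    return cursor.fetchall()
```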
|
closed
|
2019-03-12T12:51:23Z
|
2019-03-12T16:13:31Z
|
https://github.com/noirbizarre/flask-restplus/issues/605
|
[] |
NiharikaSinghal
| 0
|
plotly/dash
|
jupyter
| 2,824
|
Can callback be set to take effect with the current page or globally
|
Can a callback be set to take effect only on the current page, rather than globally? Would this be more reasonable?
|
closed
|
2024-03-31T08:31:56Z
|
2024-04-02T17:16:57Z
|
https://github.com/plotly/dash/issues/2824
|
[] |
jaxonister
| 1
|
jupyter/docker-stacks
|
jupyter
| 2,018
|
Docker Hub subscription about to expire.
|
@mathbunnyru you seem to be the most active contributor.
Just to warn you that the Docker subscription is about to expire and be downgraded to "free" in 6 days. I've contacted the Jupyter EC; they are aware and working on it, I suppose.
|
closed
|
2023-10-25T15:41:48Z
|
2023-10-26T13:30:37Z
|
https://github.com/jupyter/docker-stacks/issues/2018
|
[] |
Carreau
| 7
|
ploomber/ploomber
|
jupyter
| 542
|
run profiling to determine loading times
|
We need to optimize the time it takes ploomber to load when running
`ploomber`
or `ploomber --help`
We need to run profiling and see the different load times of the python packages and how we can optimize it to get a faster user response.
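A quick way to start (just a sketch): time the top-level import, which usually dominates CLI startup, and use `python -X importtime -c "import ploomber"` for a per-module breakdown.
```python
import time

start = time.perf_counter()
import ploomber  # noqa: E402  # import placed here on purpose so we can time it
elapsed = time.perf_counter() - start

print(f"import ploomber took {elapsed:.2f}s")
```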
|
closed
|
2022-02-04T21:57:48Z
|
2022-02-24T15:20:28Z
|
https://github.com/ploomber/ploomber/issues/542
|
[
"good first issue"
] |
idomic
| 5
|
AirtestProject/Airtest
|
automation
| 1,122
|
่ฟๆฅไธไบๆๆบ๏ผ่ฏๅซไธๅฐapp
|
**Describe the bug**
After the project runs and the corresponding app is opened, the following message keeps appearing in a loop (in both English and Chinese):
iOS connection failed, please try pressing the home button to return to the desktop and try again.
Finally the project stops with the following error:
wda.exceptions.WDAStaleElementReferenceError: WDARequestError(status=110, value={'error': 'stale element reference', 'message': 'The previously found element "Application ID" is not present in the current view anymore. Make sure the application UI has the expected state. Original error: Error getting main window kAXErrorServerNotFound'})
**python version:** `python3.9`
**airtest version:** `1.2.6`
**Device:**
iPhone 8
**Other related information**
After switching to an iPhone 12, it runs normally.
|
open
|
2023-04-06T08:38:03Z
|
2023-04-06T08:38:03Z
|
https://github.com/AirtestProject/Airtest/issues/1122
|
[] |
yuanyuan0808
| 0
|
pytest-dev/pytest-django
|
pytest
| 290
|
Please include tests in the PyPI release
|
Downstream developers love to test during installation and packaging. Please include tests.
|
closed
|
2015-11-09T11:19:26Z
|
2018-09-16T07:03:16Z
|
https://github.com/pytest-dev/pytest-django/issues/290
|
[] |
jlec
| 2
|
plotly/jupyter-dash
|
dash
| 98
|
Re-define `@app.callback` in jupyter raise error: `Duplicate callback outputs`
|
When I run this cell a **second** time (because I need to modify some code in it), it shows me the error.
This means that whenever I modify `update_figure`, I need to restart the kernel and then re-run everything again.
*(screenshots of the cell and of the `Duplicate callback outputs` error not preserved)*
|
closed
|
2022-07-31T02:44:40Z
|
2022-07-31T03:11:21Z
|
https://github.com/plotly/jupyter-dash/issues/98
|
[] |
GF-Huang
| 1
|
iperov/DeepFaceLab
|
machine-learning
| 5,700
|
Bros, can this code work well if I want to swap faces in images instead of videos? How should I implement it?
|
open
|
2023-07-10T02:14:02Z
|
2023-08-01T00:45:29Z
|
https://github.com/iperov/DeepFaceLab/issues/5700
|
[] |
Xiujian-LIANG
| 3
|
|
chaos-genius/chaos_genius
|
data-visualization
| 368
|
For anomaly detection batch train till N-now, then do incremental learning for better model adaption
|
For anomaly detection, batch train till `N-now`, then do incremental learning for better model adaptation. We also need to make this configurable as a global param.
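A very rough sketch of the idea (all names made up for illustration, not Chaos Genius code): batch-fit on history up to now, then keep adapting with `partial_fit`, gated by the proposed global param.
```python
import numpy as np
from sklearn.linear_model import SGDOneClassSVM

INCREMENTAL_LEARNING_ENABLED = True  # the proposed global param

model = SGDOneClassSVM()


def initial_batch_train(history: np.ndarray) -> None:
    """Batch-train on everything observed up to now (N-now)."""
    model.fit(history)


def on_new_batch(new_points: np.ndarray) -> None:
    """Adapt the model incrementally as fresh points arrive."""
    if INCREMENTAL_LEARNING_ENABLED:
        model.partial_fit(new_points)
```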
|
open
|
2021-11-04T02:51:25Z
|
2021-11-05T08:48:43Z
|
https://github.com/chaos-genius/chaos_genius/issues/368
|
[
"๐ ๏ธ backend",
"๐งฎ algorithms",
"P3"
] |
suranah
| 0
|
deezer/spleeter
|
deep-learning
| 454
|
How to get model to train on GPU instead of CPU?
|
Possibly an easy-to-follow guide? I'm new to Python.
|
closed
|
2020-07-12T04:14:23Z
|
2021-02-12T14:44:01Z
|
https://github.com/deezer/spleeter/issues/454
|
[
"question",
"wontfix",
"inactive"
] |
Waffled-II
| 2
|
ranaroussi/yfinance
|
pandas
| 2,311
|
Download Columns No Longer in OHLC Order
|
### Describe bug
After I upgraded from 0.2.42 to 0.2.54 the order of the dataframe columns changes on some downloads.
### Simple code that reproduces problem
This object definition is stored in the file *myYahoo.py* in my *site-packages* folder.
```
import os
import pandas as pd
import json
import datetime as dt
import yfinance as yf
import time
import copy
class Yahoo:
    def __init__(self):

    def download_single_price_history(self, symbol):
        new_data_df = yf.download(symbol, period='max', progress=False, timeout=30.0, multi_level_index=False)
        file_path = self.price_hist_path + symbol + '.csv'
        new_data_df.to_csv(file_path, index=True)  # Save price history to file
```
The symbol is set before passing it to the method using the following code.
```
from myYahoo import Yahoo
yahoo = Yahoo()
symbol = 'HD'
yahoo.download_single_price_history(symbol)
```
### Debug log
None
### Bad data proof
This is the occasional resulting strange order of the columns in the dataframe:
Date, Adj Close, Close, High, Low, Open, Volume
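As a workaround (not a fix for the underlying change), the columns can be reordered explicitly before saving so the CSV layout stays stable; this reuses `new_data_df` and `file_path` from the snippet above:
```python
ohlc_order = ["Open", "High", "Low", "Close", "Adj Close", "Volume"]
cols = [c for c in ohlc_order if c in new_data_df.columns]
new_data_df = new_data_df[cols]
new_data_df.to_csv(file_path, index=True)
```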
### yfinance version
0.2.54
### Pandas version
Pandas 2.2.3
### Python version
Python 3.10.5
### Operating system
MacOS 15.3.1
Thank you very much for yfinance. I deeply appreciate it and have used it for years.
|
closed
|
2025-02-21T23:58:45Z
|
2025-02-26T20:07:31Z
|
https://github.com/ranaroussi/yfinance/issues/2311
|
[] |
Oneingrate
| 2
|
plotly/dash
|
jupyter
| 2,793
|
[BUG] title option in
|
Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 0.42.0
dash-core-components 0.47.0
dash-html-components 0.16.0
dash-renderer 0.23.0
dash-table 3.6.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Describe the bug**
A clear and concise description of what the bug is.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots or screen recording to help explain your problem.
|
closed
|
2024-03-13T08:16:31Z
|
2024-03-13T09:13:17Z
|
https://github.com/plotly/dash/issues/2793
|
[] |
PatSev
| 0
|
dnouri/nolearn
|
scikit-learn
| 308
|
OSError: could not read bytes when trying to fetch mldata
|
I tried
`dataset = fetch_mldata('MNIST original', data_home='/Users/my_name/Virtualenv/virt1/lib/python3.5/site-packages/sklearn/datasets/')`
and
`dataset = fetch_mldata('MNIST original')
`
I get the following error, which states : OSError: could not read bytes when trying to fetch mldata.
Here is the entire Traceback.
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/my_name/Virtualenv/virt1/lib/python3.5/site-packages/sklearn/datasets/mldata.py", line 158, in fetch_mldata
matlab_dict = io.loadmat(matlab_file, struct_as_record=True)
File "/Users/my_name/Virtualenv/virt1/lib/python3.5/site-packages/scipy/io/matlab/mio.py", line 136, in loadmat
matfile_dict = MR.get_variables(variable_names)
File "/Users/my_name/Virtualenv/virt1/lib/python3.5/site-packages/scipy/io/matlab/mio5.py", line 292, in get_variables
res = self.read_var_array(hdr, process)
File "/Users/my_name/Virtualenv/virt1/lib/python3.5/site-packages/scipy/io/matlab/mio5.py", line 252, in read_var_array
return self._matrix_reader.array_from_header(header, process)
File "scipy/io/matlab/mio5_utils.pyx", line 673, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (/private/var/folders/gw/_2jq29095y7b__wtby9dg_5h0000gn/T/pip-rx828oer-build/scipy/io/matlab/mio5_utils.c:7119)
File "scipy/io/matlab/mio5_utils.pyx", line 703, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (/private/var/folders/gw/_2jq29095y7b__wtby9dg_5h0000gn/T/pip-rx828oer-build/scipy/io/matlab/mio5_utils.c:6244)
File "scipy/io/matlab/mio5_utils.pyx", line 776, in scipy.io.matlab.mio5_utils.VarReader5.read_real_complex (/private/var/folders/gw/_2jq29095y7b__wtby9dg_5h0000gn/T/pip-rx828oer-build/scipy/io/matlab/mio5_utils.c:7572)
File "scipy/io/matlab/mio5_utils.pyx", line 448, in scipy.io.matlab.mio5_utils.VarReader5.read_numeric (/private/var/folders/gw/_2jq29095y7b__wtby9dg_5h0000gn/T/pip-rx828oer-build/scipy/io/matlab/mio5_utils.c:4323)
File "scipy/io/matlab/mio5_utils.pyx", line 353, in scipy.io.matlab.mio5_utils.VarReader5.read_element (/private/var/folders/gw/_2jq29095y7b__wtby9dg_5h0000gn/T/pip-rx828oer-build/scipy/io/matlab/mio5_utils.c:3913)
File "scipy/io/matlab/streams.pyx", line 92, in scipy.io.matlab.streams.GenericStream.read_string (/private/var/folders/gw/_2jq29095y7b__wtby9dg_5h0000gn/T/pip-rx828oer-build/scipy/io/matlab/streams.c:2182)
File "scipy/io/matlab/streams.pyx", line 79, in scipy.io.matlab.streams.GenericStream.read_into (/private/var/folders/gw/_2jq29095y7b__wtby9dg_5h0000gn/T/pip-rx828oer-build/scipy/io/matlab/streams.c:1977)
OSError: could not read bytes
```
|
closed
|
2016-11-03T12:44:52Z
|
2018-04-14T06:24:57Z
|
https://github.com/dnouri/nolearn/issues/308
|
[] |
RohanVB
| 2
|
awesto/django-shop
|
django
| 851
|
Sending Customer email verification
|
I've hacked around this, but the guest-user feature makes it illogical to have customers verify their emails. Apart from adding a reCAPTCHA, what is the best way to avoid bots and/or fake users? Please let me know.
|
closed
|
2021-04-06T17:02:26Z
|
2021-09-08T01:39:01Z
|
https://github.com/awesto/django-shop/issues/851
|
[] |
SudoStain
| 1
|
plotly/dash
|
flask
| 3,200
|
Dev UI Dash 3
|
In Dash 3, dev UI:
- cannot be minimized
- is a much larger footprint
Please migrate it to a footer instead of positioning it on top of the app.
@gvwilson
|
closed
|
2025-03-06T22:30:55Z
|
2025-03-13T20:56:19Z
|
https://github.com/plotly/dash/issues/3200
|
[
"feature",
"P2"
] |
BSd3v
| 2
|
automl/auto-sklearn
|
scikit-learn
| 886
|
instantiate sklearn model or get_params from Configuration object
|
I was looking around in the code for a way to instantiate an sklearn model based on a `Configuration` object. My use-case is that I try to implement a generalized way of getting standard metadata about a completed autosklearn run. Eg, I call `autosklearn_model.get_models_with_weights()` and that ends up containing some `Configuration` objects. But these may for example just describe an sklearn model, although I understand it is possible to extend and register other model types as well. In either case I guess I would like access to an instance of the model with matching configuration, so that I can try calling `get_params()` on it to see if it supports that interface. Maybe this is simply accessible somewhere else, but my idea was to re-instantiate a dummy model based on the `Configuration`, and then do this calling of `get_params()`. Ideal would be that I could dynamically instantiate whatever model is described by the `__choice__` (even if it's not sklearn), according to however autosklearn does it internally.
I was poking around in the code and found eg https://github.com/automl/auto-sklearn/blob/bb8396b3dbe2e61cfdf65b5fbd9793b1d2d3dffc/autosklearn/pipeline/components/classification/random_forest.py#L46
I was expecting to find though some place where this import is dynamically selected based on `choice`, but maybe this is just a wrapper class and is itself chosen dynamically?
Then bits like this:
https://github.com/automl/auto-sklearn/blob/bb8396b3dbe2e61cfdf65b5fbd9793b1d2d3dffc/test/test_pipeline/components/classification/test_base.py#L279-L283
Could you point me in the right direction? Or advise if I am missing some fundamental point about how this should work.
To make sure I am clear above I'll also include an example. I have a `config` object like this:
```
Configuration:
balancing:strategy, Value: 'none'
classifier:__choice__, Value: 'random_forest'
classifier:random_forest:bootstrap, Value: 'True'
classifier:random_forest:criterion, Value: 'gini'
classifier:random_forest:max_depth, Constant: 'None'
classifier:random_forest:max_features, Value: 0.5
classifier:random_forest:max_leaf_nodes, Constant: 'None'
classifier:random_forest:min_impurity_decrease, Constant: 0.0
classifier:random_forest:min_samples_leaf, Value: 1
classifier:random_forest:min_samples_split, Value: 2
classifier:random_forest:min_weight_fraction_leaf, Constant: 0.0
data_preprocessing:categorical_transformer:categorical_encoding:__choice__, Value: 'one_hot_encoding'
data_preprocessing:categorical_transformer:category_coalescence:__choice__, Value: 'minority_coalescer'
data_preprocessing:categorical_transformer:category_coalescence:minority_coalescer:minimum_fraction, Value: 0.01
data_preprocessing:numerical_transformer:imputation:strategy, Value: 'mean'
data_preprocessing:numerical_transformer:rescaling:__choice__, Value: 'standardize'
feature_preprocessor:__choice__, Value: 'no_preprocessing'
```
And I want something that produces the equivalent of this (but without hard-coding the model choice and removing parameters that sklearn doesn't like etc):
```python
hps = {hp_name.rsplit(':')[-1]: config[hp_name] for hp_name in config if config[hp_name] is not None}
from sklearn.ensemble import RandomForestClassifier
hps = {k:v for k,v in hps.items() if k not in ['strategy', '__choice__', 'minimum_fraction']}
return to_mls_sklearn(RandomForestClassifier(**hps))
```
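In the meantime, here is a hedged sketch of the manual route: the `CLASS_MAP` registry below is hand-maintained and hypothetical (it is not an auto-sklearn API), and `config` is assumed to have been flattened into a plain `{name: value}` dict first.
```python
# Hedged sketch: map the classifier "__choice__" to an sklearn class by hand and
# filter the Configuration entries down to that estimator's own parameters.
# CLASS_MAP is a hypothetical, hand-maintained registry -- NOT an auto-sklearn API.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

CLASS_MAP = {
    "random_forest": RandomForestClassifier,
    "libsvm_svc": SVC,
}

def sklearn_estimator_from_config(config):
    """Build a plain sklearn estimator from a flat {hyperparameter: value} dict."""
    choice = config["classifier:__choice__"]
    cls = CLASS_MAP[choice]
    prefix = f"classifier:{choice}:"
    # Keep only this estimator's hyperparameters and strip the prefix.
    raw = {k[len(prefix):]: v for k, v in config.items() if k.startswith(prefix)}
    # Drop anything the sklearn constructor does not know about and coerce
    # string-encoded values ("True", "None") back to Python objects.
    valid = cls().get_params().keys()
    coerce = {"True": True, "False": False, "None": None}
    params = {k: coerce.get(v, v) for k, v in raw.items() if k in valid}
    return cls(**params)

# `config` can be the Configuration flattened to a dict, e.g. dict(configuration)
# or configuration.get_dictionary(), depending on the ConfigSpace version.
```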
|
closed
|
2020-06-27T06:40:22Z
|
2021-04-13T08:44:47Z
|
https://github.com/automl/auto-sklearn/issues/886
|
[
"enhancement"
] |
chrisbarber
| 2
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 859
|
Last Error Received:
|
Last Error Received:
Process: VR Architecture
If this error persists, please contact the developers with the error details.
Raw Error Details:
MemoryError: "Unable to allocate 2.97 GiB for an array with shape (2, 769, 259296) and data type complex64"
Traceback Error: "
File "UVR.py", line 6565, in process_start
File "separate.py", line 1034, in seperate
File "separate.py", line 1175, in inference_vr
File "separate.py", line 1152, in postprocess
"
Error Time Stamp [2023-10-05 09:15:46]
Full Application Settings:
vr_model: 3_HP-Vocal-UVR
aggression_setting: 10
window_size: 512
mdx_segment_size: 256
|
open
|
2023-10-05T01:19:37Z
|
2023-10-05T01:19:37Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/859
|
[] |
huawang761
| 0
|
plotly/jupyterlab-dash
|
dash
| 1
|
Add support for Windows
|
This looks great, but I see that Windows isn't currently supported. It would be lovely to have this extension available to us
|
open
|
2018-12-06T22:45:53Z
|
2020-05-20T00:13:31Z
|
https://github.com/plotly/jupyterlab-dash/issues/1
|
[
"enhancement"
] |
mungojam
| 10
|
JaidedAI/EasyOCR
|
pytorch
| 415
|
ๆ่่ฏญ่ฏๅซๅ๏ผ
|
closed
|
2021-04-12T08:54:26Z
|
2022-03-02T09:24:57Z
|
https://github.com/JaidedAI/EasyOCR/issues/415
|
[] |
ChChwang
| 1
|
|
PokeAPI/pokeapi
|
graphql
| 355
|
Site portal to static site
|
> What about converting the site portal to a static site? Right now it uses Django templates. I can work on that next.
From #350. Templates: https://github.com/PokeAPI/pokeapi/tree/master/templates
The 404 page might be useful as well: https://www.netlify.com/docs/redirects/#custom-404
Requirement: since we're using Netlify's Open Source plan, we need to give them proper attribution. An inclusion of their [logo](https://www.netlify.com/press/) should do the trick, or maybe a "hosted by Netlify" statement in the footer.
|
closed
|
2018-09-07T23:39:08Z
|
2018-09-23T00:56:49Z
|
https://github.com/PokeAPI/pokeapi/issues/355
|
[] |
neverendingqs
| 23
|
yzhao062/pyod
|
data-science
| 1
|
load_cardio() and load_letter() do not work under Ubuntu 14.04
|
While running comb_example.py, the program may fail due to the loadmat() function.
A quick workaround is to use synthesized data instead of real-world datasets.
This only affects comb_example.py. Will be addressed in the next release.
|
closed
|
2018-05-21T18:56:21Z
|
2018-06-05T19:52:12Z
|
https://github.com/yzhao062/pyod/issues/1
|
[
"good first issue"
] |
yzhao062
| 1
|
Farama-Foundation/Gymnasium
|
api
| 472
|
[Question] AttributeError: 'NoneType' object has no attribute 'glfwGetCurrentContext'
|
### Question
Hi!
I'm learning how to use `gymnasium` and encounter the following error:
```
Exception ignored in: <function WindowViewer.__del__ at 0x7effa4dad560>
Traceback (most recent call last):
File "/home/rlc/mambaforge/envs/tianshou/lib/python3.7/site-packages/gymnasium/envs/mujoco/mujoco_rendering.py", line 335, in __del__
File "/home/rlc/mambaforge/envs/tianshou/lib/python3.7/site-packages/gymnasium/envs/mujoco/mujoco_rendering.py", line 328, in free
File "/home/rlc/mambaforge/envs/tianshou/lib/python3.7/site-packages/glfw/__init__.py", line 2366, in get_current_context
AttributeError: 'NoneType' object has no attribute 'glfwGetCurrentContext'
```
The Mujoco window flashes and then disappears. Here is the minimum code snippet to reproduce the error
```
import gymnasium as gym
env=gym.make('Humanoid-v4', render_mode='human')
obs=env.reset()
env.render()
```
```
Gymnasium: 0.28.1
glfw: 2.5.9
```
Thanks!
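For reference, the `AttributeError` comes from the viewer's `__del__` running during interpreter shutdown, after the glfw module has already been finalized. A minimal sketch of the usual pattern that avoids it — stepping in a loop so the window stays alive and calling `env.close()` explicitly (the random policy is just for illustration):
```python
# Hedged sketch: keep the viewer alive by stepping in a loop and close the
# environment explicitly, so glfw is torn down before interpreter shutdown.
import gymnasium as gym

env = gym.make("Humanoid-v4", render_mode="human")
obs, info = env.reset()
for _ in range(1000):
    action = env.action_space.sample()          # random policy, just to keep rendering
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()  # explicit cleanup instead of relying on __del__ at exit
```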
|
closed
|
2023-04-28T03:16:41Z
|
2023-04-28T06:38:56Z
|
https://github.com/Farama-Foundation/Gymnasium/issues/472
|
[
"question"
] |
zichunxx
| 2
|
aminalaee/sqladmin
|
sqlalchemy
| 459
|
Split column_searchable_list into different fields on the frontend.
|
### Discussed in https://github.com/aminalaee/sqladmin/discussions/457
<div type='discussions-op-text'>
<sup>Originally posted by **iagobalmeida** March 31, 2023</sup>
Hey there, is there any out-of-the-box way of splitting the different searchable columns into different fields in the frontend?
If not, what class should I be extending to be able to change the way the search query is interpreted?</div>
|
open
|
2023-03-31T14:56:23Z
|
2023-03-31T14:56:53Z
|
https://github.com/aminalaee/sqladmin/issues/459
|
[
"UI"
] |
aminalaee
| 0
|
yt-dlp/yt-dlp
|
python
| 11,867
|
Youtube VR player client "android_vr" does not work with cookies to bypass age gate
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
NOTE: I believe this issue is separate from issues 9633 and 9903.
I am using player_client setting of "android_vr" to download a youtube 8k VR video. The android_vr player client is required to download the 8K versions of VR videos (format 571). However, the android_vr download does not work when used in conjunction with a cookies file to bypass the age gate of restricted videos. The cookies file was from an account/IP that is verified to be able to view the video in the browser, and the same cookies file works fine when downloading the same video with the default player_client (but of course the default client can only download the low-res version).
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--cookies', 'cookies.txt', '--extractor-args', 'youtube:player_client=android_vr', 'https://www.youtube.com/watch?v=txNn-QZDz6Q']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version master@2024.12.15.202031 from yt-dlp/yt-dlp-master-builds [d298693b1] (zip)
[debug] Python 3.9.16 (CPython x86_64 64bit) - CYGWIN_NT-10.0-26100-3.5.0-1.x86_64-x86_64-64bit-WindowsPE (OpenSSL 1.1.1w 11 Sep 2023)
[debug] exe versions: ffmpeg 2024-08-11-git-43cde54fc1-full_build-www.gyan.dev (setts), ffprobe 2024-08-11-git-43cde54fc1-full_build-www.gyan.dev, phantomjs 1.9.2
[debug] Optional libraries: Crypto-broken 2.6.1, brotli-1.1.0, certifi-2021.10.08, mutagen-1.45.1, requests-2.31.0, sqlite3-3.34.0, urllib3-1.26.7
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-master-builds/releases/latest
Latest version: master@2024.12.15.202031 from yt-dlp/yt-dlp-master-builds
yt-dlp is up to date (master@2024.12.15.202031 from yt-dlp/yt-dlp-master-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=txNn-QZDz6Q
[youtube] txNn-QZDz6Q: Downloading webpage
[debug] [youtube] Extracted SAPISID cookie
[youtube] txNn-QZDz6Q: Downloading android vr player API JSON
WARNING: [youtube] YouTube said: ERROR - Request contains an invalid argument.
WARNING: [youtube] HTTP Error 400: Bad Request. Retrying (1/3)...
[youtube] txNn-QZDz6Q: Downloading android vr player API JSON
WARNING: [youtube] YouTube said: ERROR - Request contains an invalid argument.
WARNING: [youtube] HTTP Error 400: Bad Request. Retrying (2/3)...
[youtube] txNn-QZDz6Q: Downloading android vr player API JSON
WARNING: [youtube] YouTube said: ERROR - Request contains an invalid argument.
WARNING: [youtube] HTTP Error 400: Bad Request. Retrying (3/3)...
[youtube] txNn-QZDz6Q: Downloading android vr player API JSON
WARNING: [youtube] YouTube said: ERROR - Request contains an invalid argument.
WARNING: [youtube] Unable to download API page: HTTP Error 400: Bad Request (caused by <HTTPError 400: Bad Request>)
WARNING: Only images are available for download. use --list-formats to see them
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
ERROR: [youtube] txNn-QZDz6Q: Requested format is not available. Use --list-formats for a list of available formats
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1624, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1780, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1839, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 2973, in process_video_result
raise ExtractorError(
yt_dlp.utils.ExtractorError: [youtube] txNn-QZDz6Q: Requested format is not available. Use --list-formats for a list of available formats
```
|
closed
|
2024-12-21T19:56:28Z
|
2024-12-23T23:26:36Z
|
https://github.com/yt-dlp/yt-dlp/issues/11867
|
[
"site-bug",
"site:youtube"
] |
ronaldeustace
| 7
|
geopandas/geopandas
|
pandas
| 2,450
|
QST: How to select individual columns using GeoPandas
|
- [x ] I have searched the [geopandas] tag on [StackOverflow](https://stackoverflow.com/questions/tagged/geopandas) and [GIS StackExchange](https://gis.stackexchange.com/questions/tagged/geopandas) for similar questions.
- [ x] I have asked my usage related question on [StackOverflow](https://stackoverflow.com) or [GIS StackExhange](https://gis.stackexchange.com).
---
#### Question about geopandas
I'm still new to GeoPandas, but I am trying to output a list of localities that have changes within them, grouped by locality and month. The PostGIS query is included within this function:
```
def testRead():
sql = """SELECT TO_CHAR(b.dateofchange, 'Month') as month, a.text, count(a.text)
from localities as a
join changes as b
on ST_WITHIN(b.changelocation, a.geom)
GROUP BY a.text, TO_CHAR(b.dateofchange, 'Month')
order by month desc;"""
gdf = gpd.read_postgis(sql, con=engine )
return gdf
```
Following the docs, I attempted to use this gpd.read_postgis() function, but I keep getting this error:
_ValueError: Query missing geometry column 'geom'_
Does this mean that I have to have the geom in the SELECT? But this would require the geom to also be in the GROUP BY, and that would defeat the purpose of the query. How can I go about doing this please?
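For what it's worth, a minimal sketch of the non-spatial route, assuming the same SQLAlchemy `engine` as above: since this aggregate query returns no geometry column, plain `pandas.read_sql` can run it and `read_postgis` isn't needed.
```python
# Hedged sketch: the aggregate query returns no geometry, so plain pandas is
# enough here; read_postgis() is only needed when a geometry column comes back.
import pandas as pd

def test_read(engine):
    sql = """
        SELECT TO_CHAR(b.dateofchange, 'Month') AS month, a.text, COUNT(a.text)
        FROM localities AS a
        JOIN changes AS b ON ST_WITHIN(b.changelocation, a.geom)
        GROUP BY a.text, TO_CHAR(b.dateofchange, 'Month')
        ORDER BY month DESC;
    """
    return pd.read_sql(sql, con=engine)  # ordinary DataFrame, no geometry needed
```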
|
closed
|
2022-06-04T14:44:31Z
|
2022-06-05T07:17:58Z
|
https://github.com/geopandas/geopandas/issues/2450
|
[
"question"
] |
martina-dev-11
| 2
|
newpanjing/simpleui
|
django
| 328
|
Internationalization of the admin menu
|
**What features do you wish to add?**
1. Add internationalization (i18n) support for the admin menu.
**Leave your contact details so that we can get in touch with you**
QQ: 1404711457
E-mail: 1404711457@qq.com
|
closed
|
2020-12-15T06:46:30Z
|
2021-02-05T06:32:49Z
|
https://github.com/newpanjing/simpleui/issues/328
|
[
"enhancement"
] |
Aiminsun
| 1
|
AutoGPTQ/AutoGPTQ
|
nlp
| 504
|
[BUG]RuntimeError: The temp_state buffer is too small in the exllama backend for GPTQ with act-order.
|
**Describe the bug**
RuntimeError: The temp_state buffer is too small in the exllama backend for GPTQ with act-order. Please call the exllama_set_max_input_length function to increase the buffer size for a sequence length >=2960:
from auto_gptq import exllama_set_max_input_length
model = exllama_set_max_input_length(model, max_input_length=2960)
**Hardware details**
V100
**Software version**
torch 2.1.2+cu121
auto-gptq 0.6.0
optimum 1.16.1
**To Reproduce**
use https://githubfast.com/hiyouga/LLaMA-Factory qlora finetune TheBloke/SUS-Chat-34B-GPTQ
**Screenshots**

|
open
|
2024-01-04T06:22:45Z
|
2024-01-04T06:22:45Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/504
|
[
"bug"
] |
Essence9999
| 0
|
keras-team/autokeras
|
tensorflow
| 971
|
How can the user provide custom ranges for ConvBlock kernel_size
|
Hi
I tried to use ConvBlock by giving it a list of custom kernel sizes, using the parameter kernel_size=[7, 9, 13] for example. It throws an error.
I observed that inside the ConvBlock class, the hp.Choice('kernel_size', [x, y, z], default=x) is used.
How can the user provide custom kernel sizes from his code when adding a convolutional layer? Please provide sample code for this, thanks!
|
closed
|
2020-02-15T10:54:32Z
|
2020-04-23T05:38:24Z
|
https://github.com/keras-team/autokeras/issues/971
|
[
"wontfix"
] |
picsag
| 3
|
psf/requests
|
python
| 6,233
|
Cannot send json and files the same way as data and files
|
Some scenarios cannot be handled straightforwardly with requests.
For example, when I want to send JSON and files in a multipart form, I use the `data` and `files` parameters of the request method.
But if `data` contains `None` values, they are dropped from the request body, so to work around this I use the `json` parameter instead of `data`.
However, with the `json` parameter, the `files` parameter overwrites the request body and the JSON body is removed from the request.
## Expected Result
I want to send JSON and files in a multipart form request, while the JSON body has some `None` values.
## Actual Result
This cannot be done straightforwardly. The workable solutions are messy and change the request body to another schema, like [this one](https://stackoverflow.com/a/35946962/11926259).
## Reproduction Steps
```python
import requests
data = {"key": None}
requests.patch(url, json=data, files=files_data)
```
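One hedged workaround sketch (essentially the multipart approach linked above): serialize the JSON yourself and attach it as its own part via `files`, so the `None` values survive. Whether the server accepts this shape is application-specific, and the URL and field names below are just stand-ins.
```python
# Hedged sketch: serialize the JSON by hand and attach it as its own multipart
# part, so None values survive; the server must be willing to parse that part.
import io
import json
import requests

url = "https://httpbin.org/patch"          # stand-in endpoint for illustration
data = {"key": None}
files_data = {
    "payload": (None, json.dumps(data), "application/json"),    # JSON part, no filename
    "file": ("report.txt", io.BytesIO(b"hello"), "text/plain"),  # ordinary file part
}
resp = requests.patch(url, files=files_data)
print(resp.status_code)
```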
## System Information
$ python -m requests.help
```
{
"chardet": {
"version": "4.0.0"
},
"cryptography": {
"version": "3.4.8"
},
"idna": {
"version": "3.3"
},
"implementation": {
"name": "CPython",
"version": "3.10.4"
},
"platform": {
"release": "5.15.0-47-generic",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "30000020",
"version": "21.0.0"
},
"requests": {
"version": "2.25.1"
},
"system_ssl": {
"version": "30000020"
},
"urllib3": {
"version": "1.26.5"
},
"using_pyopenssl": true
}
```
|
closed
|
2022-09-07T06:18:50Z
|
2023-09-08T00:03:08Z
|
https://github.com/psf/requests/issues/6233
|
[] |
sssanikhani
| 1
|
LAION-AI/Open-Assistant
|
python
| 3,082
|
Make Markdown'ed links more noticeable
|
(Both for Chat and Tasks.) For now, you can only spot them by the bold text. I suggest adding color or underlining them. On GitHub it looks like this: [Google](https://google.com/)

|
closed
|
2023-05-08T00:27:32Z
|
2023-05-24T12:22:13Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3082
|
[
"website",
"UI/UX"
] |
echo0x22
| 6
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 1,117
|
pip install -r requirements.txt, ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
|
This is my first time using this software or coding. Please speak in simple terms.
(voice-software) PS C:\windows> cd..
(voice-software) PS C:\> cd Users
(voice-software) PS C:\Users> cd adria
(voice-software) PS C:\Users\adria> cd OneDrive
(voice-software) PS C:\Users\adria\OneDrive> cd Desktop
(voice-software) PS C:\Users\adria\OneDrive\Desktop> cd voice-clone
(voice-software) PS C:\Users\adria\OneDrive\Desktop\voice-clone> pip install -r requirements.txt
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
(voice-software) PS C:\Users\adria\OneDrive\Desktop\voice-clone> pip install -r requirements.txt
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
(voice-software) PS C:\Users\adria\OneDrive\Desktop\voice-clone>
So what have I done wrong here? What is the fix?
|
closed
|
2022-09-25T03:30:36Z
|
2023-01-08T08:55:12Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1117
|
[] |
Atlas5779
| 3
|
custom-components/pyscript
|
jupyter
| 54
|
Feature request: debounce decorator
|
Sometimes you will want to limit triggers to a specific frequency. Therefore I would like to suggest a decorator that provides this functionality.
It would be similar to the existing @time_active decorator, only it would check the time elapsed since the last time a task was called. If that time exceeds a provided threshold, the task is called. Otherwise it is skipped.
Something like this:
```python
@state_trigger('some condition')
@debounce(millis=300)
def some_task():
pass
```
Meaning that the task will only be executed if the last execution was more than 300 ms ago.
I tried implementing this myself, only to find out that pyscript doesn't support decorators.
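In the meantime, a minimal sketch of an inline, decorator-free guard that gives the same effect (the `debounced` helper and its names are just illustrative, not part of pyscript):
```python
# Hedged sketch: an inline debounce guard using plain module state, for cases
# where a custom decorator can't be used. Times are in seconds via monotonic().
import time

_last_run: dict[str, float] = {}

def debounced(name: str, min_interval: float) -> bool:
    """Return True (and record the call) if `min_interval` seconds have passed."""
    now = time.monotonic()
    if now - _last_run.get(name, float("-inf")) < min_interval:
        return False
    _last_run[name] = now
    return True

def some_task():
    if not debounced("some_task", 0.3):   # 300 ms
        return
    print("doing the actual work")
```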
|
closed
|
2020-10-24T10:00:06Z
|
2020-10-26T02:30:35Z
|
https://github.com/custom-components/pyscript/issues/54
|
[] |
phha
| 8
|
miguelgrinberg/Flask-Migrate
|
flask
| 341
|
Error on flask db upgrade
|
Hi! I'm trying to implement flask_migrate in my project. I use the app factory pattern and followed the instructions from the blog. After setting "FLASK_APP=run.py", `flask db init` works fine and creates every file as expected. I create a test model, run `flask db migrate`, and the script is generated just fine. But when I run `flask db upgrade`, I get an error:
> (venv) [guillermo@arch softalq]$ flask db upgrade
> /home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
> 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
> INFO [alembic.runtime.migration] Context impl MySQLImpl.
> INFO [alembic.runtime.migration] Will assume non-transactional DDL.
> INFO [alembic.runtime.migration] Running upgrade -> cc85dbd830d7, empty message
> Traceback (most recent call last):
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1284, in _execute_context
> cursor, statement, parameters, context
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 590, in do_execute
> cursor.execute(statement, parameters)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/cursors.py", line 170, in execute
> result = self._query(query)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/cursors.py", line 328, in _query
> conn.query(q)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/connections.py", line 517, in query
> self._affected_rows = self._read_query_result(unbuffered=unbuffered)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/connections.py", line 732, in _read_query_result
> result.read()
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/connections.py", line 1075, in read
> first_packet = self.connection._read_packet()
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/connections.py", line 657, in _read_packet
> packet_header = self._read_bytes(4)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/connections.py", line 707, in _read_bytes
> CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
> pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File "/home/guillermo/PycharmProjects/softalq/venv/bin/flask", line 10, in <module>
> sys.exit(main())
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/flask/cli.py", line 967, in main
> cli.main(args=sys.argv[1:], prog_name="python -m flask" if as_module else None)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/flask/cli.py", line 586, in main
> return super(FlaskGroup, self).main(*args, **kwargs)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/click/core.py", line 782, in main
> rv = self.invoke(ctx)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
> return _process_result(sub_ctx.command.invoke(sub_ctx))
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
> return _process_result(sub_ctx.command.invoke(sub_ctx))
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
> return ctx.invoke(self.callback, **ctx.params)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/click/core.py", line 610, in invoke
> return callback(*args, **kwargs)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/click/decorators.py", line 21, in new_func
> return f(get_current_context(), *args, **kwargs)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/flask/cli.py", line 426, in decorator
> return __ctx.invoke(f, *args, **kwargs)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/click/core.py", line 610, in invoke
> return callback(*args, **kwargs)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/flask_migrate/cli.py", line 134, in upgrade
> _upgrade(directory, revision, sql, tag, x_arg)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/flask_migrate/__init__.py", line 96, in wrapped
> f(*args, **kwargs)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/flask_migrate/__init__.py", line 271, in upgrade
> command.upgrade(config, revision, sql=sql, tag=tag)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/alembic/command.py", line 298, in upgrade
> script.run_env()
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/alembic/script/base.py", line 489, in run_env
> util.load_python_file(self.dir, "env.py")
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/alembic/util/pyfiles.py", line 98, in load_python_file
> module = load_module_py(module_id, path)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/alembic/util/compat.py", line 184, in load_module_py
> spec.loader.exec_module(module)
> File "<frozen importlib._bootstrap_external>", line 728, in exec_module
> File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
> File "migrations/env.py", line 96, in <module>
> run_migrations_online()
> File "migrations/env.py", line 90, in run_migrations_online
> context.run_migrations()
> File "<string>", line 8, in run_migrations
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/alembic/runtime/environment.py", line 846, in run_migrations
> self.get_context().run_migrations(**kw)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/alembic/runtime/migration.py", line 520, in run_migrations
> step.migration_fn(**kw)
> File "/home/guillermo/PycharmProjects/softalq/migrations/versions/cc85dbd830d7_.py", line 21, in upgrade
> op.drop_column('datos', 'test')
> File "<string>", line 8, in drop_column
> File "<string>", line 3, in drop_column
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/alembic/operations/ops.py", line 2049, in drop_column
> return operations.invoke(op)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/alembic/operations/base.py", line 374, in invoke
> return fn(self, operation)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/alembic/operations/toimpl.py", line 81, in drop_column
> operation.table_name, column, schema=operation.schema, **operation.kw
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/alembic/ddl/impl.py", line 240, in drop_column
> self._exec(base.DropColumn(table_name, column, schema=schema))
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/alembic/ddl/impl.py", line 140, in _exec
> return conn.execute(construct, *multiparams, **params)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1020, in execute
> return meth(self, multiparams, params)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 72, in _execute_on_connection
> return connection._execute_ddl(self, multiparams, params)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1082, in _execute_ddl
> compiled,
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1324, in _execute_context
> e, statement, parameters, cursor, context
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1518, in _handle_dbapi_exception
> sqlalchemy_exception, with_traceback=exc_info[2], from_=e
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
> raise exception
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1284, in _execute_context
> cursor, statement, parameters, context
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 590, in do_execute
> cursor.execute(statement, parameters)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/cursors.py", line 170, in execute
> result = self._query(query)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/cursors.py", line 328, in _query
> conn.query(q)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/connections.py", line 517, in query
> self._affected_rows = self._read_query_result(unbuffered=unbuffered)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/connections.py", line 732, in _read_query_result
> result.read()
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/connections.py", line 1075, in read
> first_packet = self.connection._read_packet()
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/connections.py", line 657, in _read_packet
> packet_header = self._read_bytes(4)
> File "/home/guillermo/PycharmProjects/softalq/venv/lib/python3.7/site-packages/pymysql/connections.py", line 707, in _read_bytes
> CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
> sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
> [SQL: ALTER TABLE datos DROP COLUMN test]
> (Background on this error at: http://sqlalche.me/e/e3q8)
The changes are reflected to the db, but if I make another change in `models.py` and try to do another `flask db migrate`, it reports the db is not up to date. I'm using Mariadb on Arch Linux, latest SQLAlchemy and Flask_migrate with Python 3.7. I hope I've given enough info.
Cheers
|
closed
|
2020-05-17T22:28:57Z
|
2020-10-09T19:03:05Z
|
https://github.com/miguelgrinberg/Flask-Migrate/issues/341
|
[
"question"
] |
guillermohs9
| 9
|
thp/urlwatch
|
automation
| 48
|
UnicodeEncodeError: 'ascii' codec can't encode character '\xfc' in position 6
|
The following exception happens when the return value from a shell script contains umlauts (ä, ö, ü):
```
Traceback (most recent call last):
File "/usr/bin/urlwatch", line 376, in <module>
main(parser.parse_args())
File "/usr/bin/urlwatch", line 343, in main
report.finish()
File "/usr/lib/python3.5/site-packages/urlwatch/handler.py", line 128, in finish
ReporterBase.submit_all(self, self.job_states, duration)
File "/usr/lib/python3.5/site-packages/urlwatch/reporters.py", line 81, in submit_all
cls(report, cfg, job_states, duration).submit()
File "/usr/lib/python3.5/site-packages/urlwatch/reporters.py", line 306, in submit
print(line)
UnicodeEncodeError: 'ascii' codec can't encode character '\xfc' in position 6: ordinal not in range(128)
```
|
closed
|
2016-02-10T15:23:20Z
|
2016-02-16T15:13:02Z
|
https://github.com/thp/urlwatch/issues/48
|
[] |
marbon87
| 14
|
davidsandberg/facenet
|
computer-vision
| 1,206
|
Training over a pretrained model
|
I was using a pretrained model for face recognition and wanted to use it in an attendance system, but it turns out the model gives some wrong predictions for similar-looking people. I tried to train the pretrained model further on my dataset, which has around 4500 images of 30 people. Is my approach of training on top of the pretrained model appropriate, and what hyperparameters should I choose? I chose the Adam optimizer and the triplet loss started to increase in the very next iteration. Please help.
|
open
|
2021-08-16T05:09:13Z
|
2023-04-11T14:16:33Z
|
https://github.com/davidsandberg/facenet/issues/1206
|
[] |
RAJA-PARIKSHAT
| 4
|
CTFd/CTFd
|
flask
| 1,889
|
Easier to change user mode
|
We need to make it easier to change between user and teams mode. If anything we should simply detect that the user wants to switch and then tell them of the downsides of it. It should be easy to do from Users -> Teams but the opposite would need to be a lossy conversion.
|
closed
|
2021-05-12T19:02:02Z
|
2021-07-23T19:07:00Z
|
https://github.com/CTFd/CTFd/issues/1889
|
[] |
ColdHeat
| 0
|
encode/httpx
|
asyncio
| 2,453
|
S
|
The starting point for issues should usually be a discussion...
https://github.com/encode/httpx/discussions
Possible bugs may be raised as a "Potential Issue" discussion, feature requests may be raised as an "Ideas" discussion. We can then determine if the discussion needs to be escalated into an "Issue" or not.
This will help us ensure that the "Issues" list properly reflects ongoing or needed work on the project.
---
- [ ] Initially raised as discussion #...
|
closed
|
2022-11-20T10:35:07Z
|
2022-11-21T09:11:06Z
|
https://github.com/encode/httpx/issues/2453
|
[] |
truestorybaby
| 0
|
apify/crawlee-python
|
web-scraping
| 98
|
Simplify argument type `requests`
|
Somewhere we use the following:
```python
requests: list[BaseRequestData | Request]
```
Let's refactor the code to accept only one type.
On the places where we need to use:
```python
arg_name: list[Request | str]
```
Let's use a different identifier than `requests`, e.g. `sources`.
See the following conversation for context - https://github.com/apify/crawlee-py/pull/56#discussion_r1557493986.
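A hedged sketch of the proposed shape — `Request` below is a stand-in dataclass rather than the real crawlee class, just to illustrate the normalization of `sources`:
```python
# Hedged sketch of the proposed signature: the public argument is `sources`,
# typed as list[Request | str], and strings are normalized to Request objects
# internally. `Request` here is a stand-in dataclass, not the crawlee class.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Request:
    url: str


def add_requests(sources: list[Request | str]) -> list[Request]:
    """Accept URLs or Request objects and hand back a uniform list of Requests."""
    return [src if isinstance(src, Request) else Request(url=src) for src in sources]


normalized = add_requests(["https://crawlee.dev", Request(url="https://apify.com")])
```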
|
closed
|
2024-04-09T16:39:41Z
|
2024-06-18T22:12:01Z
|
https://github.com/apify/crawlee-python/issues/98
|
[
"t-tooling",
"debt"
] |
vdusek
| 1
|
RobertCraigie/prisma-client-py
|
pydantic
| 27
|
Add support for native query engine bindings
|
## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma has experimental support for natively binding the query engine to the node client, this reduces the overhead between the client and the rust binary, improving performance.
We should look into whether or not this is feasible for us to do as well.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
This would probably make use of [PyO3](https://github.com/PyO3/pyo3) and [pyo3-asyncio](https://github.com/awestlake87/pyo3-asyncio). I don't know how feasible this is yet, but if it does end up shipping we would have to bundle the Rust binary with the package, either using wheels or a build extension, as hosting downloads of Rust binaries is not something I could feasibly provide.
Possibly related: we should look into cibuildwheel for packaging; see uvloop for an example using GitHub Actions.
## Status
Moved status to #165
|
open
|
2021-06-21T17:31:26Z
|
2023-02-25T20:10:23Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/27
|
[
"kind/improvement",
"topic: internal",
"topic: perf",
"level/advanced",
"priority/medium"
] |
RobertCraigie
| 4
|
plotly/dash
|
jupyter
| 2,220
|
[BUG] CI swallowing some errors
|
Specifically in lint-unit - for example [this run](https://app.circleci.com/pipelines/github/plotly/dash/3978/workflows/6436f15e-3401-4e8b-8dfc-de6d0d1c569a/jobs/65966) has two:
`dash-html-components/scripts/extract-elements.js` (during `npm run build.sequential`) fails with:
```
Error: Unexpected number of elements extracted from https://developer.mozilla.org/en-US/docs/Web/HTML/Element -
Found 123 but expected 125
Check the output and edit expectedElCount if this is intended.
```
and then `black` doesn't run:
```
> private::lint.black
> if [[ ${PYVERSION:-python39} != python36 ]]; then black dash tests --exclude metadata_test.py --check; fi
sh: 1: [[: not found
```
Why are these not resulting in the run failing, and how can we ensure all such errors DO fail the run?
|
closed
|
2022-09-06T21:31:15Z
|
2023-04-21T14:46:06Z
|
https://github.com/plotly/dash/issues/2220
|
[] |
alexcjohnson
| 0
|
pywinauto/pywinauto
|
automation
| 968
|
Get Caret position for the current element
|
Here is [how](https://stackoverflow.com/questions/61368790/how-can-i-get-the-caret-position-from-a-textbox-in-another-application-not-the) to grab the caret position for the current element using UI Automation in C++ or C#.
So in order for this to work we're going to need access to `IUIAutomationTextPattern2`. As mentioned on Gitter:
> We will need to implement property iface_text2, not only extending _build_pattern_ids_dic
|
open
|
2020-08-08T20:12:27Z
|
2023-11-24T23:09:08Z
|
https://github.com/pywinauto/pywinauto/issues/968
|
[] |
LexiconCode
| 4
|
ydataai/ydata-profiling
|
data-science
| 1,376
|
Further analysis
|
### Current Behaviour
Say the data types detected by ydata-profiling are what I want. If I want to do more analysis on these data, how can I access each variable?
### Expected Behaviour
I want to use other kinds of visualization, like box plots or scatter plots, so I need the detected data types.
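A heavily hedged sketch of one way to get at them via the report's JSON export: `to_json()` is a documented export, but treat the `"variables"`/`"type"` keys and the local `titanic.csv` path as assumptions that may differ between versions.
```python
# Hedged sketch: pull the detected variable types back out of the report via its
# JSON export; the exact key names have shifted across releases.
import json
import pandas as pd
from ydata_profiling import ProfileReport

df = pd.read_csv("titanic.csv")            # hypothetical local copy of the dataset
profile = ProfileReport(df, title="Titanic")
description = json.loads(profile.to_json())
detected_types = {name: info["type"] for name, info in description["variables"].items()}

numeric_cols = [c for c, t in detected_types.items() if t == "Numeric"]
df[numeric_cols].plot(kind="box")          # now do your own plots on those columns
```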
### Data Description
I am using the Titanic dataset.
### Code that reproduces the bug
_No response_
### pandas-profiling version
v4.2.0
### Dependencies
```Text
pandas==1.5.3
```
### OS
_No response_
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
|
closed
|
2023-07-03T16:17:58Z
|
2023-08-09T15:44:20Z
|
https://github.com/ydataai/ydata-profiling/issues/1376
|
[
"question/discussion โ"
] |
Fatemeh490
| 3
|
PrefectHQ/prefect
|
automation
| 16,600
|
Setting `enforce_parameter_schema: false` in yaml is ignored
|
### Bug summary
Similar to https://github.com/PrefectHQ/prefect/issues/16409 I could have sworn the original PR included a fix for yaml processing too?
Anyway, the cli flag is now fixed, but setting `enforce_parameter_schema: false` has no effect on deploy
### Version info
```Text
Version: 3.1.11
API version: 0.8.4
Python version: 3.12.5
Git commit: e448bd34
Built: Thu, Jan 2, 2025 1:11 PM
OS/Arch: darwin/arm64
Profile: ephemeral
Server type: server
Pydantic version: 2.10.4
```
### Additional context
_No response_
|
closed
|
2025-01-03T21:29:07Z
|
2025-01-07T17:29:05Z
|
https://github.com/PrefectHQ/prefect/issues/16600
|
[
"bug"
] |
ciarancourtney
| 1
|
deeppavlov/DeepPavlov
|
nlp
| 823
|
TF dependency requires specific CPU instructions
|
I try:
- build docker Image on Intel Core computer
- use it on Xeon
I get the following error:
2019-03-05 19:16:44.991 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 103: [loading vocabulary from /root/.deeppavlov/models/ner_rus/word.dict]
2019-03-05 19:16:45.81 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 103: [loading vocabulary from /root/.deeppavlov/models/ner_rus/tag.dict]
2019-03-05 19:16:45.84 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 103: [loading vocabulary from /root/.deeppavlov/models/ner_rus/char.dict]
2019-03-05 19:16:45.94 INFO in 'deeppavlov.models.embedders.fasttext_embedder'['fasttext_embedder'] at line 52: [loading fastText embeddings from /root/.deeppavlov/downloads/embeddings/lenta_lower_100.bin]
2019-03-05 19:16:49.243475: F tensorflow/core/platform/cpu_feature_guard.cc:37] The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine.
|
closed
|
2019-04-30T09:05:29Z
|
2019-04-30T09:41:51Z
|
https://github.com/deeppavlov/DeepPavlov/issues/823
|
[] |
bavadim
| 3
|
pennersr/django-allauth
|
django
| 3,106
|
Gitea provider does not work by default with gitea.com
|
It seems the Gitea provider expects a [/user](https://github.com/pennersr/django-allauth/blob/master/allauth/socialaccount/providers/gitea/views.py#L25) endpoint. But this 404's. It appears the Gitea project added a `login/oauth/userinfo` endpoint for this instead. The new endpoint uses "sub" for the ID instead of "id" as well. @techknowlogick it looks like you created the original provider, perhaps you know more about it?
# Reproduction steps
- Register application, https://gitea.com/user/settings/applications/
- Add Social App via Django Admin
- Attempt to log in
|
closed
|
2022-06-02T20:32:19Z
|
2022-06-07T09:13:32Z
|
https://github.com/pennersr/django-allauth/issues/3106
|
[] |
bufke
| 3
|
ipython/ipython
|
data-science
| 14,111
|
AttributeError: module 'psutil' has no attribute 'Process'
|
Hello
I'm getting the following error when trying to run a jupyter notebook
```
[I 11:07:25.643 NotebookApp] KernelRestarter: restarting kernel (2/5), new random ports
Traceback (most recent call last):
File "/home/bob/.pyenv/versions/3.8.16/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/bob/.pyenv/versions/3.8.16/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel_launcher.py", line 15, in <module>
from ipykernel import kernelapp as app
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel/kernelapp.py", line 52, in <module>
from .ipkernel import IPythonKernel
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel/ipkernel.py", line 20, in <module>
from .comm.comm import BaseComm
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel/comm/__init__.py", line 3, in <module>
from .comm import Comm
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel/comm/comm.py", line 15, in <module>
from ipykernel.kernelbase import Kernel
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel/kernelbase.py", line 73, in <module>
class Kernel(SingletonConfigurable):
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel/kernelbase.py", line 83, in Kernel
processes: t.Dict[str, psutil.Process] = {}
AttributeError: module 'psutil' has no attribute 'Process'
[I 11:07:28.679 NotebookApp] KernelRestarter: restarting kernel (3/5), new random ports
^C[I 11:07:29.004 NotebookApp] interrupted
Serving notebooks from local directory: /home/bob/git/org/projcet/apps/content
1 active kernel
Jupyter Notebook 6.5.4 is running at:
http://localhost:8888/?token=3c47465075562155f642e3c57364ae243071fb78292120b4
or http://127.0.0.1:8888/?token=3c47465075562155f642e3c57364ae243071fb78292120b4
Shutdown this notebook server (y/[n])? Traceback (most recent call last):
File "/home/bob/.pyenv/versions/3.8.16/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/bob/.pyenv/versions/3.8.16/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel_launcher.py", line 15, in <module>
from ipykernel import kernelapp as app
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel/kernelapp.py", line 52, in <module>
from .ipkernel import IPythonKernel
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel/ipkernel.py", line 20, in <module>
from .comm.comm import BaseComm
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel/comm/__init__.py", line 3, in <module>
from .comm import Comm
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel/comm/comm.py", line 15, in <module>
from ipykernel.kernelbase import Kernel
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel/kernelbase.py", line 73, in <module>
class Kernel(SingletonConfigurable):
File "/home/bob/git/org/projcet/apps/content/.venv/lib/python3.8/site-packages/ipykernel/kernelbase.py", line 83, in Kernel
processes: t.Dict[str, psutil.Process] = {}
AttributeError: module 'psutil' has no attribute 'Process'
```
# Versions
- OS: Ubuntu 23:04
- Architecture: 64bit
- Psutil version: 5.9.5
`pip3 show psutil`
```
Name: psutil
Version: 5.9.5
Summary: Cross-platform lib for process and system monitoring in Python.
Home-page: https://github.com/giampaolo/psutil
Author: Giampaolo Rodola
Author-email: g.rodola@gmail.com
License: BSD-3-Clause
Location: /home/bob/git/project/name/apps/content/.venv/lib/python3.8/site-packages
Requires:
Required-by: ipykernel
```
Python version: Python 3.8.16
|
open
|
2023-07-06T23:10:23Z
|
2023-07-10T17:37:11Z
|
https://github.com/ipython/ipython/issues/14111
|
[] |
magick93
| 1
|
Lightning-AI/pytorch-lightning
|
data-science
| 20,409
|
Errors when deploying PyTorch Lightning Model to AWS SageMaker TrainingJobs: SMDDP does not support ReduceOp
|
### Bug description
Hi, I am trying to follow the DDP (Distributed Data Parallel) guidance ([Guide 1](https://aws.amazon.com/blogs/machine-learning/run-pytorch-lightning-and-native-pytorch-ddp-on-amazon-sagemaker-training-featuring-amazon-search/), [Guide 2](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel-modify-sdp-pt-lightning.html)) and deploy my deep learning models to AWS SageMaker. However, when running it, I am encountering the following error. [1 instance, 4 GPUs]
May I ask how I can fix this error? Any suggestions or comments would be greatly appreciated!
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
Model Code:
from torch.distributed import init_process_group, destroy_process_group
from torchmetrics.functional import pairwise_cosine_similarity
from lightning.pytorch.callbacks import EarlyStopping, ModelCheckpoint
from lightning.pytorch.loggers import CSVLogger
import smdistributed.dataparallel.torch.torch_smddp
from lightning.pytorch.strategies import DDPStrategy
from lightning.fabric.plugins.environments.lightning import LightningEnvironment
import lightning as pl
env = LightningEnvironment()
env.world_size = lambda: int(os.environ["WORLD_SIZE"])
env.global_rank = lambda: int(os.environ["RANK"])
def main(args):
train_samples = 1000
val_samples = 200
test_samples = 200
csv_logger = CSVLogger(save_dir=args.model_dir, name=args.modelname)
# Initialize the DataModule
data_module = ImagePairDataModule(
data_save_folder=args.data_dir,
train_samples=train_samples,
val_samples=val_samples,
test_samples=test_samples,
batch_size=args.batch_size,
num_workers=12,
)
# Initialize the model
model = Siamese()
# Configure checkpoint callback to save the best model
checkpoint_callback = ModelCheckpoint(
monitor="val_loss", # Monitor validation loss
dirpath=args.model_dir, # Directory to save model checkpoints
filename="best-checkpoint-test", # File name of the checkpoint
save_top_k=1, # Only save the best model
mode="min", # Save when validation loss is minimized
save_on_train_epoch_end=False,
)
# Configure early stopping to stop training if validation loss doesn't improve
early_stopping_callback = EarlyStopping(
monitor="val_loss", # Monitor validation loss
patience=args.patience, # Number of epochs with no improvement before stopping
mode="min", # Stop when validation loss is minimized
)
# Set up ddp on SageMaker
# https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel-modify-sdp-pt-lightning.html
ddp = DDPStrategy(
cluster_environment=env,
process_group_backend="smddp",
accelerator="gpu"
)
# Initialize the PyTorch Lightning Trainer
trainer = pl.Trainer(
max_epochs=args.epochs,
strategy=ddp, # Distributed Data Parallel strategy
devices=torch.cuda.device_count(), # Use all available GPUs
precision=16, # Use mixed precision (16-bit)
callbacks=[checkpoint_callback, early_stopping_callback],
log_every_n_steps=10,
logger=csv_logger,
)
# Train the model
trainer.fit(model, datamodule=data_module)
best_model_path = checkpoint_callback.best_model_path
print(f"Saving best model to: {best_model_path}")
# Destroy the process group if distributed training was initialized
if torch.distributed.is_initialized():
torch.distributed.destroy_process_group()
if __name__ == "__main__":
# Set up argument parser for command-line arguments
parser = argparse.ArgumentParser()
# Adding arguments
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--batch_size', type=int, default=256)
parser.add_argument('--patience', type=int, default=10)
parser.add_argument('--modelname', type=str, default='testing_model')
# Container environment
parser.add_argument("--hosts", type=list, default=json.loads(os.environ["SM_HOSTS"]))
parser.add_argument("--current-host", type=str, default=os.environ["SM_CURRENT_HOST"])
parser.add_argument("--model-dir", type=str, default=os.environ["SM_MODEL_DIR"])
parser.add_argument("--data-dir", type=str, default=os.environ["SM_CHANNEL_TRAINING"])
parser.add_argument("--num-gpus", type=int, default=os.environ["SM_NUM_GPUS"])
# Parse arguments
args = parser.parse_args()
# Ensure the model directory exists
os.makedirs(args.model_dir, exist_ok=True)
# Launch the main function
main(args)
```
Training Job Code:
```
hyperparameters_set = {
'epochs': 100, # Total number of epochs
'batch_size': 200, # Input batch size on each device
'patience': 10, # Early stopping patience
'modelname': model_task_name, # Name for the model
}
estimator = PyTorch(
entry_point = "model_01.py",
source_dir = "./sage_code_300",
output_path = jobs_folder + "/",
code_location = jobs_folder,
role = role,
input_mode = 'FastFile',
py_version="py310",
framework_version="2.2.0",
instance_count=1,
instance_type="ml.g5.12xlarge",
hyperparameters=hyperparameters_set,
volume_size=800,
distribution={'pytorchddp': {'enabled': True}},
dependencies=["./sage_code/requirements.txt"],
)
estimator.fit({"training": inputs},
job_name = job_name,
wait = False,
logs=True)
```
### Error messages and logs
```
2024-11-09 00:25:02 Uploading - Uploading generated training model
2024-11-09 00:25:02 Failed - Training job failed
---------------------------------------------------------------------------
UnexpectedStatusException Traceback (most recent call last)
Cell In[26], line 34
14 estimator = PyTorch(
15 entry_point = "model_01.py",
16 source_dir = "./sage_code_300",
(...)
29 dependencies=["./sage_code_300/requirements.txt"],
30 )
32 ######### Run the model #############
33 # Send the model to sage training jobs
---> 34 estimator.fit({"training": inputs},
35 job_name = job_name,
36 wait = True, # True
37 logs=True)
40 model_will_save_path = os.path.join(jobs_folder, job_name, "output", "model.tar.gz")
41 print(f"\nModel is saved at:\n\n{model_will_save_path}")
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/workflow/pipeline_context.py:346, in runnable_by_pipeline.<locals>.wrapper(*args, **kwargs)
342 return context
344 return _StepArguments(retrieve_caller_name(self_instance), run_func, *args, **kwargs)
--> 346 return run_func(*args, **kwargs)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/estimator.py:1376, in EstimatorBase.fit(self, inputs, wait, logs, job_name, experiment_config)
1374 forward_to_mlflow_tracking_server = True
1375 if wait:
-> 1376 self.latest_training_job.wait(logs=logs)
1377 if forward_to_mlflow_tracking_server:
1378 log_sagemaker_job_to_mlflow(self.latest_training_job.name)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/estimator.py:2750, in _TrainingJob.wait(self, logs)
2748 # If logs are requested, call logs_for_jobs.
2749 if logs != "None":
-> 2750 self.sagemaker_session.logs_for_job(self.job_name, wait=True, log_type=logs)
2751 else:
2752 self.sagemaker_session.wait_for_job(self.job_name)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/session.py:5945, in Session.logs_for_job(self, job_name, wait, poll, log_type, timeout)
5924 def logs_for_job(self, job_name, wait=False, poll=10, log_type="All", timeout=None):
5925 """Display logs for a given training job, optionally tailing them until job is complete.
5926
5927 If the output is a tty or a Jupyter cell, it will be color-coded
(...)
5943 exceptions.UnexpectedStatusException: If waiting and the training job fails.
5944 """
-> 5945 _logs_for_job(self, job_name, wait, poll, log_type, timeout)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/session.py:8547, in _logs_for_job(sagemaker_session, job_name, wait, poll, log_type, timeout)
8544 last_profiler_rule_statuses = profiler_rule_statuses
8546 if wait:
-> 8547 _check_job_status(job_name, description, "TrainingJobStatus")
8548 if dot:
8549 print()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/session.py:8611, in _check_job_status(job, desc, status_key_name)
8605 if "CapacityError" in str(reason):
8606 raise exceptions.CapacityError(
8607 message=message,
8608 allowed_statuses=["Completed", "Stopped"],
8609 actual_status=status,
8610 )
-> 8611 raise exceptions.UnexpectedStatusException(
8612 message=message,
8613 allowed_statuses=["Completed", "Stopped"],
8614 actual_status=status,
8615 )
UnexpectedStatusException: Error for Training job model-res50-300k-noft-contra-aug-2024-11-09-00-18-33: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
ExitCode 1
ErrorMessage "RuntimeError: SMDDP does not support: ReduceOp
Traceback (most recent call last)
File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.10/site-packages/mpi4py/__main__.py", line 7, in <module>
main()
File "/opt/conda/lib/python3.10/site-packages/mpi4py/run.py", line 230, in main
run_command_line(args)
File "/opt/conda/lib/python3.10/site-packages/mpi4py/run.py", line 47, in run_command_line
run_path(sys.argv[0], run_name='__main__')
File "/opt/conda/lib/python3.10/runpy.py", line 289, in run_path
return _run_module_code(code, init_globals, run_name,
File "/opt/conda/lib/python3.10/runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "s4_mod_pl_cloud_01.py", line 369, in <module>
main(args)
File "s4_mod_pl_cloud_01. Check troubleshooting guide for common errors: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-python-sdk-troubleshooting.html
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0): 2.4.0
#- PyTorch Version (e.g., 2.4): 2.2
#- Python version (e.g., 3.12): 3.10
#- OS (e.g., Linux): Linux2
#- CUDA/cuDNN version: 12.4
#- GPU models and configuration: A10G
#- How you installed Lightning(`conda`, `pip`, source): pip
```
</details>
### More info
_No response_
|
closed
|
2024-11-09T00:58:05Z
|
2024-11-09T20:40:07Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20409
|
[
"bug",
"needs triage",
"ver: 2.4.x"
] |
ZihanChen1995
| 0
|
pallets/flask
|
flask
| 5,268
|
When was 'Blueprint.before_app_first_request' deprecated and what replaces it?
|
I have resumed work on a Flask 1.x-based project after some time has passed, and sorting through deprecations and removals that have happened in the interim. The first place I got stuck was the `@bp.before_app_first_request` decorator.
Except to note that <q>`app.before_first_request` and `bp.before_app_first_request` decorators are removed</q> in 2.3.0, [the changelog](https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-0) doesn't mention when these were deprecated, why, or what replaced them.
Perusing previous versions of the docs, I discover that `Blueprint.before_app_first_request` was deprecated in 2.2, with this note:
> Deprecated since version 2.2: Will be removed in Flask 2.3. Run setup code when creating the application instead.
Something in my gut tells me that I'm going to be writing a kind of hook handler that will iterate through all the blueprints to run their "before first request" initialization routines. Effectively, an amateurish reimplementation of what `Blueprint.before_app_first_request` already did.
That leaves me wondering about the context for this deprecation. Can someone enlighten me as to why it was removed?
_To be clear, I intend to document whatever workaround I come up with (for the benefit of others in this situation) and close this issue myself. I'm not asking for tech support or "homework help" from the Flask maintainers, or to resurrect the feature from the grave. `:)`_
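For anyone landing here later, a minimal sketch of the "run setup code when creating the application" pattern the deprecation note points to, assuming a simple app factory and a hypothetical `init_reports` function standing in for whatever used to run before the first request:
```python
# Hedged sketch: each blueprint exposes an ordinary init function and the app
# factory calls it once, inside an app context, instead of deferring to the
# first request.
from flask import Flask, Blueprint

bp = Blueprint("reports", __name__)

def init_reports(app: Flask) -> None:
    # whatever used to live in @bp.before_app_first_request
    app.config.setdefault("REPORTS_READY", True)

def create_app() -> Flask:
    app = Flask(__name__)
    app.register_blueprint(bp)
    with app.app_context():
        init_reports(app)
    return app
```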
|
closed
|
2023-09-29T23:06:44Z
|
2023-10-15T00:06:15Z
|
https://github.com/pallets/flask/issues/5268
|
[] |
ernstki
| 1
|
slackapi/python-slack-sdk
|
asyncio
| 1,107
|
slack_sdk.rtm_v2.RTMClient.on with a custom callback
|
In the v1 RTM client there was a way to specify a custom callback, so I could do something like this:
```
import slack_sdk


class SlackBot:
    def __init__(self, token):
        self.rtm = slack_sdk.rtm_v2.RTMClient(token=token)
        # How to do the decorator here when `rtm` is defined in __init__

    def handle(self, client, event):
        ...  # Do something interesting


bot = SlackBot(token)
slack_sdk.rtm.RTMClient.on(event="message", callback=bot.handle)
```
But in the v2 RTM client the documentation doesn't call out any way to do this same custom callback.
The current documentation shows a very simple use case for this functionality, but it doesn't seem to 'scale' to more complex usages of the v2 RTM client.
Another example solving for this use case would be great!
```
import slack_sdk


class SlackBot:
    def __init__(self, token):
        self.rtm = slack_sdk.rtm_v2.RTMClient(token=token)
        # How to do the decorator here when `rtm` is defined in __init__

    def handle(self, client, event):
        ...  # Do something interesting


bot = SlackBot(token)
# Or somehow register `bot.handle` here against the RTMClient?
```
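One pattern that appears to work (untested sketch, assuming rtm_v2's `on()` can also be called as a plain function rather than only used with `@` syntax, and that handlers receive `(client, event)`):
```
import os

from slack_sdk.rtm_v2 import RTMClient


class SlackBot:
    def __init__(self, token):
        self.rtm = RTMClient(token=token)
        # on("message") returns the decorator, so it can be applied to a
        # bound method explicitly instead of via @-syntax at class scope.
        self.rtm.on("message")(self.handle)

    def handle(self, client, event):
        print(event.get("text"))  # do something interesting here

    def run(self):
        self.rtm.start()  # blocks and dispatches events to the registered handlers


if __name__ == "__main__":
    SlackBot(os.environ["SLACK_BOT_TOKEN"]).run()
```
If that works, documenting it on the page above would cover the class-based use case.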
### The page URLs
- https://slack.dev/python-slack-sdk/real_time_messaging.html
|
closed
|
2021-08-28T01:50:45Z
|
2021-10-21T21:38:33Z
|
https://github.com/slackapi/python-slack-sdk/issues/1107
|
[
"question",
"rtm-client",
"Version: 3x"
] |
michael-robbins
| 9
|
apify/crawlee-python
|
automation
| 804
|
Circular dependency on from crawlee.storages import Dataset
|
Currently `from crawlee.storages import Dataset` fails with a circular-import error. This also makes some of our examples fail on circular import; for example, try this one: https://crawlee.dev/python/docs/examples/beautifulsoup-crawler
Restructure the code to prevent the circular dependency.
Probably introduced by `from crawlee import service_container`, which was [recently added](https://github.com/apify/crawlee-python/blame/master/src/crawlee/storages/_key_value_store.py#L7).
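For illustration, the usual ways to break such a cycle are to defer the import to type-checking time or to the call site. The module layout and accessor name below are hypothetical, not Crawlee's actual code:
```
# storages/_dataset.py (hypothetical module, for illustration only)
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only evaluated by type checkers, so it cannot create a runtime cycle.
    from crawlee import service_container


def _storage_client():
    # Importing at the call site instead of at module import time also
    # avoids the cycle, at the cost of a slightly later failure mode.
    from crawlee import service_container

    return service_container.get_storage_client()  # hypothetical accessor
```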
|
closed
|
2024-12-11T12:54:07Z
|
2024-12-11T13:33:15Z
|
https://github.com/apify/crawlee-python/issues/804
|
[
"bug",
"t-tooling",
"v0.5"
] |
Pijukatel
| 0
|
deepspeedai/DeepSpeed
|
deep-learning
| 6,636
|
[BUG] if CUDA is not installed, pip3 install deepspeed==0.14.0 fails in installed_cuda_version()
|
**Describe the bug**
CUDA is not installed, and there is no CPU/GPU device set up.
`pip3 install deepspeed==0.14.0` fails in `installed_cuda_version()`.
**To Reproduce**
Steps to reproduce the behavior:
CUDA is not installed on the system,
pip3 install deepspeed==0.14.0
**Expected behavior**
A clear and concise description of what you expected to happen.
**ds_report output**
Preparing metadata (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1kmwvije/deepspeed_6c20884bfa084655b99473b9abf58814/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1kmwvije/deepspeed_6c20884bfa084655b99473b9abf58814/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-2j2fd59_
cwd: /tmp/pip-install-1kmwvije/deepspeed_6c20884bfa084655b99473b9abf58814/
Complete output (25 lines):
[2024-10-17 08:56:34,510] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
.......................................
File "<string>", line 1, in <module>
File "/tmp/pip-install-1kmwvije/deepspeed_6c20884bfa084655b99473b9abf58814/setup.py", line 100, in <module>
cuda_major_ver, cuda_minor_ver = installed_cuda_version()
File "/tmp/pip-install-1kmwvije/deepspeed_6c20884bfa084655b99473b9abf58814/op_builder/builder.py", line 50, in installed_cuda_version
raise MissingCUDAException("CUDA_HOME does not exist, unable to compile CUDA op(s)")
op_builder.builder.MissingCUDAException: CUDA_HOME does not exist, unable to compile CUDA op(s)
----------------------------------------
..............................................................................
**Screenshots**
If applicable, add screenshots to help explain your problem.
**System info (please complete the following information):**
- OS: [e.g. Ubuntu 18.04]
- GPU count and types [e.g. two machines with x8 A100s each]
- Interconnects (if applicable) [e.g., two machines connected with 100 Gbps IB]
- Python version
- Any other relevant info about your setup
**Launcher context**
Are you launching your experiment with the `deepspeed` launcher, MPI, or something else?
**Docker context**
Are you using a specific docker image that you can share?
**Additional context**
Add any other context about the problem here.
|
closed
|
2024-10-17T09:12:14Z
|
2024-10-22T01:50:29Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6636
|
[
"bug",
"build"
] |
hijeffwu
| 4
|
tensorflow/tensor2tensor
|
machine-learning
| 1,565
|
MultiProblem hparams, loss increasing
|
### Description
I am trying to use multi-problem. Think of my problem as translating an English layman text to an English domain expert language - like layman text to legal text, or something like that. From a language perspective, both sides of the translation are English with slightly different characteristics. I want to pretrain an unsupervised language model for "legal English" and use the pretrained language model to improve the performance of translator based on Transformer.
My loss has been diverging for the problem no matter how much I reduce the step size. This model uses transformer_tiny, with mixing_schedule=pretrain, and learning_rate = 0.002.
My problem definition is as follows:
```
@registry.register_problem
class LanguagemodelLaymanTranslate(multi_problem.MultiProblem):
  """Pretrain using keyword data, and then train on translation data QK"""

  @property
  def num_generate_tasks(self):
    return 1

  def prepare_to_generate(self, data_dir, tmp_dir):
    pass

  def __init__(self, was_reversed=False, was_copy=False):
    super(LanguagemodelLaymanTranslate, self).__init__(was_reversed, was_copy)

    self.task_list.append(LanguagemodelLegaltextV1())
    self.task_list.append(TranslateLaymantolegalV1())

  @property
  def vocab_type(self):
    return text_problems.VocabType.SUBWORD
```

The hparams are as follows:
```
@registry.register_hparams
def transformer_pretrain_lm_tiny():
  """Hparams for transformer on LM pretraining on TPU, large model."""
  hparams = transformer_tiny()
  hparams.batch_size = 128
  hparams.multiproblem_mixing_schedule = "pretrain"
  hparams.learning_rate = 0.001
  return hparams
```
The component problems are standard text2text problems or text2self problems. I am learning the vocab from the language model problem, and returning it via `self.vocab_filename` in the translation problem.
...
### Environment information
```
OS: <your answer here>
$ pip freeze | grep tensor
tensor2tensor==1.12.0
tensorboard==1.12.2
tensorflow-metadata==0.9.0
tensorflow-probability==0.5.0
$ python -V
Python 3.6.6
```
Am I doing it right? I am following the documentation in
https://github.com/tensorflow/tensor2tensor/blob/master/docs/multi_problem.md
I can provide any information as needed in addition to what's above. Can anyone advise me on how to proceed?
|
open
|
2019-05-01T00:34:57Z
|
2019-05-01T00:34:57Z
|
https://github.com/tensorflow/tensor2tensor/issues/1565
|
[] |
tensorator
| 0
|
Miserlou/Zappa
|
django
| 1,956
|
ReadTimeoutError on Zappa Deploy
|
After deploying my Flask app successfully several times, `zappa deploy` stopped working. When I try to deploy, I get the following error:
```
Read timeout on endpoint URL: "{not disclosing my URL for security reasons}"
Error: Unable to upload to S3. Quitting.
```
## Expected Behavior
When deploying, Zappa should create (and has created using this machine in the past) all necessary resources and upload my zipped project to S3.
## Actual Behavior
The app package is created and zipped, then the IAM policy and S3 bucket are created successfully. Then I see a progress bar (which does not update regularly, but only a handful of times over the course of several minutes). After ~5 minutes or so, the progress bar disappears. Then, after another ~5 minutes, I get the error message above.
I have replaced the statement that `print`s this exception with a call to `logging.exception`, which gave the following stacktrace:
```
13:19 $ zappa deploy
Calling deploy for stage dev..
Downloading and installing dependencies..
- psycopg2-binary==2.8.4: Using locally cached manylinux wheel
- markupsafe==1.1.1: Using locally cached manylinux wheel
- sqlite==python3: Using precompiled lambda package
'python3.7'
Packaging project as zip.
Uploading fabian-test-zappa-poging-12006-dev-1573215672.zip (23.4MiB)..
100%|████████████████████████████████████████| 24.5M/24.5M [05:40<00:00, 20.4KB/s]
ERROR:Read timeout on endpoint URL: "https://fabian-test-zappa-12006.s3.eu-west-1.amazonaws.com/fabian-test-zappa-poging-12006-dev-1573215672.zip?uploadId=u6kP9nuJRwK0wFBFOVa9kIJECBYftFW3p1a0o__Ne9SxKT4i7jmvY_0rwJ4uy7Zapo0JzAFRgrn9nwJN4RothWiO9_7G_MXhQUAistjbJ9QVmWBX0ZnZUEGB572T2jjH&partNumber=1"
Traceback (most recent call last):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 421, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 416, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.7/http/client.py", line 1321, in getresponse
response.begin()
File "/usr/lib/python3.7/http/client.py", line 296, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.7/http/client.py", line 257, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.7/socket.py", line 589, in readinto
return self._sock.recv_into(b)
File "/usr/lib/python3.7/ssl.py", line 1052, in recv_into
return self.read(nbytes, buffer)
File "/usr/lib/python3.7/ssl.py", line 911, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/httpsession.py", line 263, in send
chunked=self._chunked(request.headers),
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/util/retry.py", line 376, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/packages/six.py", line 735, in reraise
raise value
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 423, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 331, in _raise_timeout
self, url, "Read timed out. (read timeout=%s)" % timeout_value
urllib3.exceptions.ReadTimeoutError: AWSHTTPSConnectionPool(host='fabian-test-zappa-12006.s3.eu-west-1.amazonaws.com', port=443): Read timed out. (read timeout=60)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/zappa/core.py", line 962, in upload_to_s3
Callback=progress.update
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/boto3/s3/inject.py", line 131, in upload_file
extra_args=ExtraArgs, callback=Callback)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/boto3/s3/transfer.py", line 279, in upload_file
future.result()
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/futures.py", line 106, in result
return self._coordinator.result()
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/futures.py", line 265, in result
raise self._exception
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/tasks.py", line 126, in __call__
return self._execute_main(kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/tasks.py", line 150, in _execute_main
return_value = self._main(**kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/upload.py", line 722, in _main
Body=body, **extra_args)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/client.py", line 648, in _make_api_call
operation_model, request_dict, request_context)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/client.py", line 667, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 137, in _send_request
success_response, exception):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 231, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 244, in _send
return self.http_session.send(request)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/httpsession.py", line 289, in send
raise ReadTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL: "https://fabian-test-zappa-12006.s3.eu-west-1.amazonaws.com/fabian-test-zappa-poging-12006-dev-1573215672.zip?uploadId=W5Ishdue_X_UFTLhPVDQ.TCR600JN1GxNHEGE9RkRaY.fWWElxjvSDQ2IOiwH.A4eg7fCYBUNjBdWikVY9Mz3nPtIBSgK3MkShShiMRtpcKfC2uW_jniCCblTJsMkKaE&partNumber=2"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/zappa/core.py", line 965, in upload_to_s3
self.s3_client.upload_file(source_path, bucket_name, dest_path)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/boto3/s3/inject.py", line 131, in upload_file
extra_args=ExtraArgs, callback=Callback)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/boto3/s3/transfer.py", line 279, in upload_file
future.result()
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/futures.py", line 106, in result
return self._coordinator.result()
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/futures.py", line 265, in result
raise self._exception
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/tasks.py", line 126, in __call__
return self._execute_main(kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/tasks.py", line 150, in _execute_main
return_value = self._main(**kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/upload.py", line 722, in _main
Body=body, **extra_args)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/client.py", line 648, in _make_api_call
operation_model, request_dict, request_context)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/client.py", line 667, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 137, in _send_request
success_response, exception):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 231, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 244, in _send
return self.http_session.send(request)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/httpsession.py", line 289, in send
raise ReadTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL: "{not disclosing the URL for obvious reasons}"
Error: Unable to upload to S3. Quitting.
```
## Possible Fix
I don't know of a fix, but I have tested the following:
- My teammates can `zappa deploy` with the exact code/configuration that I use, using their machines.
- They can also `zappa deploy` using an AWS profile with an access key to my account, which means this is probably not a permissions issue.
- I have deployed to a different AZ, which gives the same error.
- I have thrown away my repo and virtualenv and made new ones. The error persists.
- I have successfully uploaded a file to this S3 bucket with boto3.
## Your Environment
* Zappa version used: `0.48.2`
* Operating System and Python version: Ubuntu 18.04, Python 3.7.5
* The output of `pip freeze`:
```
apispec==1.3.3
argcomplete==1.9.3
attrs==19.3.0
Babel==2.7.0
boto3==1.10.12
botocore==1.13.12
certifi==2019.9.11
cfn-flip==1.2.2
chardet==3.0.4
Click==7.0
colorama==0.4.1
defusedxml==0.6.0
docutils==0.15.2
durationpy==0.5
Flask==1.1.1
Flask-AppBuilder==2.2.0
Flask-Babel==0.12.2
Flask-JWT-Extended==3.24.1
Flask-Login==0.4.1
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.4.1
Flask-WTF==0.14.2
future==0.16.0
hjson==3.0.1
idna==2.8
importlib-metadata==0.23
itsdangerous==1.1.0
Jinja2==2.10.3
jmespath==0.9.3
jsonschema==3.1.1
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.1.1
marshmallow==2.19.5
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.19.0
more-itertools==7.2.0
pipdeptree==0.13.2
placebo==0.9.0
prison==0.1.2
psycopg2-binary==2.8.4
PyJWT==1.7.1
pyrsistent==0.15.5
python-dateutil==2.6.1
python-slugify==1.2.4
python3-openid==3.1.0
pytz==2019.3
PyYAML==5.1.2
requests==2.22.0
s3transfer==0.2.1
six==1.13.0
SQLAlchemy==1.3.10
SQLAlchemy-Utils==0.35.0
toml==0.10.0
tqdm==4.19.1
troposphere==2.5.2
Unidecode==1.1.1
urllib3==1.25.6
Werkzeug==0.16.0
wsgi-request-logger==0.4.6
WTForms==2.2.1
zappa==0.48.2
zipp==0.6.0
```
* Link to your project (optional):
* Your `zappa_settings.py`:
```
{
"dev": {
"app_function": "app.app_init.app",
"aws_region": "eu-west-1",
"profile_name": "default",
"project_name": "fabian-test-zappa-poging-12006",
"runtime": "python3.7",
"s3_bucket": "fabian-test-zappa-12006"
}
}
```
|
open
|
2019-11-08T13:19:05Z
|
2022-12-02T11:55:13Z
|
https://github.com/Miserlou/Zappa/issues/1956
|
[] |
Faaab
| 1
|
tableau/server-client-python
|
rest-api
| 1,233
|
cgi module is deprecated and will be removed in python 3.13
|
Per [PEP-0594](https://peps.python.org/pep-0594/), the cgi module will be removed in the 3.13 release. Starting to see notices popping up for this now.
INFO - /usr/local/lib/python3.11/site-packages/tableauserverclient/server/endpoint/datasources_endpoint.py:1: DeprecationWarning: 'cgi' is deprecated and slated for removal in Python 3.13
|
open
|
2023-05-18T20:03:59Z
|
2024-10-11T19:05:19Z
|
https://github.com/tableau/server-client-python/issues/1233
|
[
"infra"
] |
shortywz
| 2
|
iMerica/dj-rest-auth
|
rest-api
| 161
|
Idea: Allow session usage without Token model
|
# Use case
Sometimes it's desirable to have a SPA that only uses session authentication. Sessions are very simple and work sufficiently well when the SPA is running all under one domain.
# Problem
It's not obvious how to do this. The quickstart suggests enabling `rest_framework.authtoken`. There is a setting to use JWT authentication instead. But there is no obvious way to use sessions, and only sessions.
# Workaround
We can fake auth tokens with "no op" functions and classes.
```
REST_AUTH_TOKEN_MODEL = "utils.NoopModel"
REST_AUTH_TOKEN_CREATOR = "utils.noop_token_creator"
```
```
def noop_token_creator(token_model, user, serializer):
    return None


class NoopModel:
    pass
```
An alternative workaround is to enable the auth token app even though it isn't used. This can lead to problems such as [this](https://github.com/encode/django-rest-framework/issues/7561) which are not obvious how to work around. Custom logic around user models and login views must account for unnecessary token creation.
In either case, the workaround involves either going out of one's way to support tokens, which are then never used, or ensuring they get disabled at all times.
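For completeness, the rest of a session-only setup is just standard DRF and dj-rest-auth settings (sketch only; adjust to your project):
```
# settings.py (sketch) - rely on Django's session machinery only
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": [
        "rest_framework.authentication.SessionAuthentication",
    ],
    "DEFAULT_PERMISSION_CLASSES": [
        "rest_framework.permissions.IsAuthenticated",
    ],
}

# dj-rest-auth: keep session login and point token handling at the no-ops above
REST_SESSION_LOGIN = True
REST_AUTH_TOKEN_MODEL = "utils.NoopModel"
REST_AUTH_TOKEN_CREATOR = "utils.noop_token_creator"
```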
|
closed
|
2020-10-30T16:54:15Z
|
2021-12-20T16:33:56Z
|
https://github.com/iMerica/dj-rest-auth/issues/161
|
[] |
bufke
| 7
|
pydata/xarray
|
numpy
| 9,437
|
Example datatree for use in tutorial documentation
|
### What is your issue?
Copied from https://github.com/xarray-contrib/datatree/issues/100 additional comments are there.
## Example datatree for use in tutorial documentation
What would help me enormously with [writing documentation](https://github.com/xarray-contrib/datatree/issues/61) would be a killer example datatree, which I could open and use to demonstrate use of all types of methods. Just like we have the `"air_temperature"` example dataset [used in the main xarray documentation](https://docs.xarray.dev/en/stable/user-guide/plotting.html#imports).
To be as useful as possible, this example tree should hit a few criteria:
- [ ] **Nested** - there needs to be some reason why you wouldn't just use a Dataset to organise this data. Multiple resolutions is a simple reason, but it also should be >1 level deep.
- [ ] **Common coordinates** - it should have a least one common coordinate stored closer to the root of the tree. For example a reference normalisation value of some quantity, or perhaps some grid-related information that applies to the data in multiple sub-groups.
- [ ] **Heterogeneous data** - there is no restriction on the relationship between data in different nodes, so we should demonstrate this by storing data that is as different as possible (but still somehow related). I'm thinking maybe some demographic data vs geographical, or model data vs observational.
- [ ] **Small** - though we would download this with pooch instead of uploading the data files in the repo, we still want this to be small enough that we don't cause problems when building or viewing our docs.
- [ ] **Multidimensional** - the data stored in the leaves needs to have enough dimensions so that I can reduce/aggregate it and still have something interesting left to plot.
- [ ] **Recognisable** - Ideally it would contain some relatable data. The existing Dataset example is nice because you can immediately see you are looking at a (low-resolution) map of North America. Maybe a satellite image of Manhattan Island or something?
A really good inspiration is this pseudo-structure provided in https://github.com/pydata/xarray/issues/4118:

This would hit all of the criteria above, if it actually existed somewhere I could find!
What I would like is for people who have more familiarity with real geo-science data products to help me make this killer example tree, or at least point me towards data that I might use.
If we have multiple good suggestions I could make multiple different examples to use, but I think I would prefer one really good one to multiple quite good ones. Alternatively any extras could end up getting used for some future example notebooks though.
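For concreteness, here is a toy version of the multi-resolution idea (invented numbers, just to show the shape; depending on the version the constructor lives in `xarray.DataTree` or the standalone `datatree` package):
```
import numpy as np
import xarray as xr

# Fine-resolution data plus a coarsened copy, stored as sibling groups.
fine = xr.Dataset(
    {"temperature": (("y", "x"), np.random.rand(4, 4))},
    coords={"y": np.arange(4), "x": np.arange(4)},
)
coarse = fine.coarsen(x=2, y=2).mean()

tree = xr.DataTree.from_dict(
    {
        "/": xr.Dataset(attrs={"title": "toy multi-resolution example"}),
        "/fine": fine,
        "/coarse": coarse,
    }
)
print(tree)
```
A real tutorial dataset would of course need recognisable data rather than random numbers, which is exactly what this issue is asking for.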
|
open
|
2024-09-07T14:05:59Z
|
2024-09-07T14:05:59Z
|
https://github.com/pydata/xarray/issues/9437
|
[
"topic-documentation",
"topic-DataTree"
] |
eni-awowale
| 0
|
Textualize/rich
|
python
| 2,498
|
Colored text incorrect width.
|
Using raw ANSI color codes within a Panel messes up the border (the width of the string is probably misestimated because of the control characters).
Note: the top Panel is uncolored; it's yellow because of my terminal theme.

Code sample:
```
from rich.panel import Panel
from rich import print
red = "\033[31m"
reset = "\033[0m"
print(Panel(f"Hello in RED!"))
print(Panel(f"{red}Hello in RED!{reset}"))
```
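For reference, keeping the styling inside Rich (so the renderer measures only the visible characters) avoids the problem; a quick sketch:
```
from rich import print
from rich.panel import Panel
from rich.text import Text

# Option 1: use Rich's own markup instead of raw escape codes.
print(Panel("[red]Hello in RED![/red]"))

# Option 2: if the ANSI string comes from elsewhere, parse it first so the
# width is measured on the visible characters only.
red, reset = "\033[31m", "\033[0m"
print(Panel(Text.from_ansi(f"{red}Hello in RED!{reset}")))
```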
Environment:
```
<class 'rich.console.Console'>
A high level console interface.

<console width=430 ColorSystem.TRUECOLOR>

color_system = 'truecolor'
encoding = 'utf-8'
file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>
height = 83
is_alt_screen = False
is_dumb_terminal = False
is_interactive = True
is_jupyter = False
is_terminal = True
legacy_windows = False
no_color = False
options = ConsoleOptions(size=ConsoleDimensions(width=430, height=83), legacy_windows=False, min_width=1, max_width=430, is_terminal=True, encoding='utf-8', max_height=83, justify=None, overflow=None, no_wrap=False, highlight=None, markup=None, height=None)
quiet = False
record = False
safe_box = True
size = ConsoleDimensions(width=430, height=83)
soft_wrap = False
stderr = False
style = None
tab_size = 8
width = 430

<class 'rich._windows.WindowsConsoleFeatures'>
Windows features available.
WindowsConsoleFeatures(vt=False, truecolor=False)
truecolor = False
vt = False

Environment Variables:
{'TERM': 'xterm-kitty', 'COLORTERM': 'truecolor', 'CLICOLOR': None, 'NO_COLOR': None, 'TERM_PROGRAM': None, 'COLUMNS': None, 'LINES': None, 'JUPYTER_COLUMNS': None, 'JUPYTER_LINES': None, 'JPY_PARENT_PID': None, 'VSCODE_VERBOSE_LOGGING': None}
platform="Darwin"
rich==12.5.1
```
|
closed
|
2022-08-30T05:21:09Z
|
2022-08-30T15:33:03Z
|
https://github.com/Textualize/rich/issues/2498
|
[
"Needs triage"
] |
BrianPugh
| 3
|
allenai/allennlp
|
nlp
| 4,915
|
Timeline for 2.0?
|
Hi all,
I've noticed that there are mentions of a 2.0 release---while I understand that it's probably hard to estimate a particular release date, do you think it'll be on the order of days? weeks? months? Would be nice to have some information to better plan some new projects I'm currently starting now.
Thanks!
|
closed
|
2021-01-15T20:51:33Z
|
2021-01-15T21:00:28Z
|
https://github.com/allenai/allennlp/issues/4915
|
[
"question"
] |
nelson-liu
| 3
|
vchaptsev/cookiecutter-django-vue
|
graphql
| 44
|
Problem loading page
|
After I run `sudo docker-compose up --build`, I see
```
frontend_1 | App running at:
frontend_1 | - Local: http://localhost:8080/
frontend_1 |
frontend_1 | It seems you are running Vue CLI inside a container.
frontend_1 | Access the dev server via http://localhost:8080
frontend_1 |
frontend_1 | Note that the development build is not optimized.
frontend_1 | To create a production build, run npm run build.
frontend_1 |
```
However, I cannot connect to localhost 8080 in the browser. Am I missing a step?
|
closed
|
2019-10-13T17:39:10Z
|
2019-10-14T17:55:35Z
|
https://github.com/vchaptsev/cookiecutter-django-vue/issues/44
|
[] |
eScribMac
| 1
|
grillazz/fastapi-sqlalchemy-asyncpg
|
pydantic
| 92
|
low carbon footprint with good cache :)
|
- every database update could trigger a refresh of all cache values, or clear all cache values
- this can be part of the base `save` and `update` methods (rough sketch below)
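Rough sketch of the idea (the class and cache names are made up, not the project's actual base model):
```
from typing import Any


class Cache:
    """Stand-in for whatever backend the app uses (e.g. Redis)."""

    async def clear(self) -> None:
        ...


class Base:
    async def save(self, session: Any, cache: Cache) -> None:
        session.add(self)
        await session.commit()
        # Invalidate everything after any write so readers never see stale rows.
        await cache.clear()
```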
|
closed
|
2023-06-09T07:35:41Z
|
2023-08-02T08:55:54Z
|
https://github.com/grillazz/fastapi-sqlalchemy-asyncpg/issues/92
|
[] |
grillazz
| 0
|
vanna-ai/vanna
|
data-visualization
| 802
|
FerretDB
|
I would like to be able to work through **FerretDB, which is a wrapper for PostgreSQL**
https://docs.ferretdb.io/
I want to interact with JSON data inside SQL.
I would like to use MongoDB, but it is not SQL, so I think FerretDB is a great option.
As a result we would get a **search like regular SQL but with the ability to easily work with JSON**.
|
open
|
2025-03-11T06:33:34Z
|
2025-03-11T06:33:34Z
|
https://github.com/vanna-ai/vanna/issues/802
|
[] |
pit123-00
| 0
|
developmentseed/lonboard
|
data-visualization
| 8
|
`Map.save` for offline (outside of Python) use
|
- Serialize the Arrow table to a base64 string
- Create an HTML file with data, layer parameters, etc. (rough sketch of the data half below)
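A minimal sketch of the serialization half, assuming the Arrow IPC stream format is acceptable (the HTML side would also need the layer parameters and the bundled JS):
```
import base64

import pyarrow as pa


def table_to_base64(table: pa.Table) -> str:
    # Serialize to the Arrow IPC stream format, then base64-encode so the
    # payload can be embedded directly in an HTML file.
    sink = pa.BufferOutputStream()
    with pa.ipc.new_stream(sink, table.schema) as writer:
        writer.write_table(table)
    return base64.b64encode(sink.getvalue().to_pybytes()).decode("ascii")


table = pa.table({"lon": [4.9, 5.1], "lat": [52.3, 52.4]})
html = f"<html><body><script>const DATA = '{table_to_base64(table)}';</script></body></html>"
with open("map.html", "w") as f:
    f.write(html)
```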
|
closed
|
2023-09-29T18:11:09Z
|
2023-11-03T21:48:51Z
|
https://github.com/developmentseed/lonboard/issues/8
|
[
"javascript"
] |
kylebarron
| 1
|