Columns: repo_name (string, 9-75 chars), topic (string, 30 classes), issue_number (int64, 1-203k), title (string, 1-976 chars), body (string, 0-254k chars), state (string, 2 classes), created_at (string, 20 chars), updated_at (string, 20 chars), url (string, 38-105 chars), labels (list, 0-9 items), user_login (string, 1-39 chars), comments_count (int64, 0-452)

| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ryfeus/lambda-packs
|
numpy
| 20
|
Magpie package: Multi-label text classification
|
Hi, great work! You inspired me to build my own package for a project, which you can find on my [repo](https://github.com/Sach97/serverless-multilabel-text-classification). Unfortunately I'm facing an issue.
I have this [error](https://github.com/Sach97/serverless-multilabel-text-classification/issues/1). Have you seen this type of error before?
|
closed
|
2018-04-30T20:50:46Z
|
2018-05-02T18:15:10Z
|
https://github.com/ryfeus/lambda-packs/issues/20
|
[] |
sachaarbonel
| 1
|
aiortc/aiortc
|
asyncio
| 332
|
Receiving parallel to sending frames
|
I am trying to modify the server example so that receiving and sending frames happen in parallel.
The problem appeared when I modified the frame processing and it became too slow, so I need to find a way to deal with that. I currently see two ways to do it:
1. Find out the length of the incoming frame queue, so I can read more frames when the queue gets too big.
2. Read frames as fast as they arrive, in a parallel task.
For the first way I don't understand how to get the queue length. Is that possible with WebRTC at all? Maybe this approach is wrong.
For the second way I tried to receive frames in a parallel task, but receiving in parallel did not work in my implementation. I believe it is more feasible than the first way, but I can't get it working for now.
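For the second way, here is a rough sketch of what decoupling receiving from processing could look like, assuming the incoming track is an aiortc `MediaStreamTrack` (as in the server example). The reader task drains frames as fast as they arrive and keeps only the newest one, so slow processing never blocks reception. The `LatestFrameTrack` name and the drop-oldest policy are illustrative assumptions, not aiortc API.
```python
import asyncio
from aiortc import MediaStreamTrack

class LatestFrameTrack(MediaStreamTrack):
    """Illustrative sketch: read the source track at full rate in a background
    task and hand only the most recent frame to the (slow) processing side."""
    kind = "video"

    def __init__(self, source):
        super().__init__()
        self.source = source
        self.queue = asyncio.Queue(maxsize=1)          # keep only the newest frame
        self._reader = asyncio.ensure_future(self._read_forever())

    async def _read_forever(self):
        while True:
            frame = await self.source.recv()           # receive as fast as frames come
            if self.queue.full():
                self.queue.get_nowait()                # drop the stale frame
            self.queue.put_nowait(frame)

    async def recv(self):
        frame = await self.queue.get()                 # latest available frame
        # ... slow per-frame processing would go here ...
        return frame
```
Whether dropping stale frames is acceptable depends on the use case; this only shows how reception can run independently of processing.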
|
closed
|
2020-04-09T10:22:19Z
|
2021-03-07T14:52:33Z
|
https://github.com/aiortc/aiortc/issues/332
|
[] |
Alick09
| 3
|
feder-cr/Jobs_Applier_AI_Agent_AIHawk
|
automation
| 293
|
Non-descriptive errors, any advice?
|
```
Traceback (most recent call last):
File "App/HawkAI/src/linkedIn_job_manager.py", line 138, in apply_jobs
self.easy_applier_component.job_apply(job)
File "App/HawkAI/src/linkedIn_easy_applier.py", line 67, in job_apply
raise Exception(f"Failed to apply to job! Original exception: \nTraceback:\n{tb_str}")
Exception: Failed to apply to job! Original exception:
Traceback:
Traceback (most recent call last):
File "App/HawkAI/src/linkedIn_easy_applier.py", line 63, in job_apply
self._fill_application_form(job)
File "App/HawkAI/src/linkedIn_easy_applier.py", line 130, in _fill_application_form
if self._next_or_submit():
^^^^^^^^^^^^^^^^^^^^^^
File "App/HawkAI/src/linkedIn_easy_applier.py", line 145, in _next_or_submit
self._check_for_errors()
File "App/HawkAI/src/linkedIn_easy_applier.py", line 158, in _check_for_errors
raise Exception(f"Failed answering or file upload. {str([e.text for e in error_elements])}")
Exception: Failed answering or file upload. ['Please enter a valid answer', 'Please enter a valid answer']
```
Any idea why it keeps failing to answer? The lines involved are mostly about filling in the application, clicking next or submit, or uploading files.
|
closed
|
2024-09-05T19:39:07Z
|
2024-09-11T02:01:47Z
|
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/293
|
[] |
elonbot
| 3
|
deepfakes/faceswap
|
deep-learning
| 595
|
Adjust convert does not work for GAN and GAN128 models
|
The Adjust convert plugin fails with GAN or GAN128 models.
## Expected behavior
Adjust convert completes without errors
## Actual behavior
GAN and GAN128 fail with errors (please see the attached pic)
## Steps to reproduce
1) Extract faces from images and train GAN or GAN128 model
2) Perform python faceswap convert with Adjust plugin
## Other relevant information
- **Command lined used (GAN128)**: faceswap.py convert -i "path/to/images" -o "output/path" -m "path/to/GAN128/model" -t GAN128 -c Adjust
- **Operating system and version:** Ubuntu 16.04.5 LTS
- **Python version:** 3.5.2
- **Faceswap version:** 4376bbf4f85f9771b0e3752ccf9504efb4e43d21
- **Faceswap method:** GPU

|
closed
|
2019-01-23T23:19:49Z
|
2019-01-23T23:37:18Z
|
https://github.com/deepfakes/faceswap/issues/595
|
[] |
temp-42
| 2
|
supabase/supabase-py
|
fastapi
| 280
|
Error when running this code
|
When I run this code I get an error:
```
import os
from supabase import create_client, Client
url: str = os.environ.get("SUPABASE_URL")
key: str = os.environ.get("SUPABASE_KEY")
supabase: Client = create_client(url, key)
# Create a random user login email and password.
random_email: str = "cam@example.com"
random_password: str = "9696"
user = supabase.auth.sign_up(email=random_email, password=random_password)
```
Here's the error message:
```
Traceback (most recent call last):
File "C:\Users\Camer\AppData\Local\Programs\Python\Python310\lib\site-packages\gotrue\helpers.py", line 16, in check_response
response.raise_for_status()
File "C:\Users\Camer\AppData\Local\Programs\Python\Python310\lib\site-packages\httpx\_models.py", line 1508, in
raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '422 Unprocessable Entity' for url 'https://gcaycwdskjdsfqvrtdqh.supabase.co/auth/v1/signup'
For more information check: https://httpstatuses.com/422
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Camer\Cams-Stuff\coding\pyTesting\main.py", line 11, in <module>
user = supabase.auth.sign_up(email=random_email, password=random_password)
File "C:\Users\Camer\AppData\Local\Programs\Python\Python310\lib\site-packages\gotrue\_sync\client.py", line 129, in sign_up
response = self.api.sign_up_with_email(
File "C:\Users\Camer\AppData\Local\Programs\Python\Python310\lib\site-packages\gotrue\_sync\api.py", line 140, in sign_up_with_email
return SessionOrUserModel.parse_response(response)
File "C:\Users\Camer\AppData\Local\Programs\Python\Python310\lib\site-packages\gotrue\types.py", line 26, in parse_response
check_response(response)
File "C:\Users\Camer\AppData\Local\Programs\Python\Python310\lib\site-packages\gotrue\helpers.py", line 18, in check_response
raise APIError.from_dict(response.json())
gotrue.exceptions.APIError
```
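Only a guess from the 422 status, not a confirmed diagnosis: GoTrue rejected the signup payload itself, and Supabase's default password policy requires at least 6 characters, so the 4-character "9696" would be refused. A minimal sketch of the same call with a longer password, surfacing the server-side reason via the `APIError` type shown in the traceback:
```python
import os
from supabase import create_client, Client
from gotrue.exceptions import APIError

url: str = os.environ.get("SUPABASE_URL")
key: str = os.environ.get("SUPABASE_KEY")
supabase: Client = create_client(url, key)

try:
    # assumption: the 422 comes from the password failing the project's
    # minimum-length policy (6 characters by default), not from the client call
    user = supabase.auth.sign_up(email="cam@example.com", password="a-much-longer-password")
except APIError as exc:
    # print the server-side reason instead of the bare exception type
    print(exc.msg if hasattr(exc, "msg") else exc)
```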
|
closed
|
2022-09-27T23:50:54Z
|
2022-10-07T08:35:21Z
|
https://github.com/supabase/supabase-py/issues/280
|
[] |
camnoalive
| 1
|
ipyflow/ipyflow
|
jupyter
| 22
|
add binder launcher
|
See e.g. [here](https://github.com/jtpio/jupyterlab-cell-flash)
|
closed
|
2020-05-11T01:12:44Z
|
2020-05-12T22:54:34Z
|
https://github.com/ipyflow/ipyflow/issues/22
|
[] |
smacke
| 0
|
BayesWitnesses/m2cgen
|
scikit-learn
| 525
|
The converted C model compiles very slowly, how can I fix it?
|
open
|
2022-06-17T07:29:37Z
|
2022-06-17T07:29:37Z
|
https://github.com/BayesWitnesses/m2cgen/issues/525
|
[] |
miszuoer
| 0
|
|
python-visualization/folium
|
data-visualization
| 1,701
|
On click circle radius changes not available
|
**Is your feature request related to a problem? Please describe.**
After adding the folium.Circle feature to the map I just get circles defined by a radius. When there are several of them, they overlap each other when close enough, and there seems to be no option to switch them on/off by clicking on the marker to which the circle has been appended.
The issue in detail has been raised here:
https://stackoverflow.com/questions/74520790/python-folium-circle-not-working-along-with-popup
and here
https://stackoverflow.com/questions/75096366/python-folium-clickformarker-2-functions-dont-collaborate-with-each-other
**Describe the solution you'd like**
I would like an option that allows me to control the folium.Circle appearance based on the click feature, keeping circles invisible when the cursor is outside the circle's range.
**Describe alternatives you've considered**
I tried subclassing `class Circle(folium.ClickForMarker):`.
It doesn't work correctly. Despite the if statement, it appears in all cases at once. Moreover, it conflicts with other classes such as `class ClickForOneMarker(folium.ClickForMarker):`, making it completely unusable.
**Additional context**
Provided in the link above
**Implementation**
The implementation should add a new option that allows the on-click operation to be used with the folium.Circle feature.
|
closed
|
2023-01-13T09:31:00Z
|
2023-02-17T10:47:52Z
|
https://github.com/python-visualization/folium/issues/1701
|
[] |
Krukarius
| 1
|
CPJKU/madmom
|
numpy
| 175
|
madmom.features.notes module incomplete
|
There should be something like a `NoteOnsetProcessor` to pick the note onsets from a 2-d note activation function. The respective functionality could then be removed from `madmom.features.onsets.PeakPickingProcessor`. Be aware that this processor returns MIDI note numbers right now which is quite counter-intuitive.
Furthermore a `NoteTrackingProcessor` should be created which not only detects the note onsets but also the length (and velocity) of the notes.
|
closed
|
2016-07-20T09:56:45Z
|
2017-03-02T07:42:43Z
|
https://github.com/CPJKU/madmom/issues/175
|
[] |
superbock
| 0
|
vitalik/django-ninja
|
django
| 430
|
[BUG] Schema defaults when using Pagination
|
**The bug**
It seems that when I set default values with Field(), pagination does not work as expected.
In this case it returns the default values of the schema (Bob, age 21).
**Versions**
- Python version: [3.10]
- Django version: [4.0.4]
- Django-Ninja version: [0.17.0]
**How to reproduce:**
Use the following code snippets or [clone my repo here.](https://github.com/florisgravendeel/DjangoNinjaPaginationBug)
models.py
```python
from django.db import models
class Person(models.Model):
    id = models.AutoField(primary_key=True)
    name = models.CharField(null=False, blank=False, max_length=30)
    age = models.PositiveSmallIntegerField(null=False, blank=False)

    def __str__(self):
        return self.name + ", age: " + str(self.age) + " id: " + str(self.id)
```
api.py
```python
from typing import List
from ninja import Schema, Field, Router
from ninja.pagination import PageNumberPagination, paginate
from DjangoNinjaPaginationBug.models import Person
router = Router()
class PersonOutBugged(Schema):
    id: int = Field(1, alias="Short summary of the variable.")
    name: str = Field("Bob", alias="Second short summary of the variable.")
    age: str = Field(21, alias="Third summary of the variable.")


class PersonOutWorks(Schema):
    id: int
    name: str
    age: str


@router.get("/person-bugged", response=List[PersonOutBugged], tags=["Person"])
@paginate(PageNumberPagination)
def list_persons(request):
    """Retrieves all persons from the database. Bugged. Returns the default values of the schema. """
    return Person.objects.all()


@router.get("/person-works", response=List[PersonOutWorks], tags=["Person"])
@paginate(PageNumberPagination)
def list_persons2(request):
    """Retrieves all persons from the database. This works. """
    return Person.objects.all()
```
Don't forget to add persons to the database (if you're using the code snippets)!
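For what it's worth, the defaults showing up are probably explained by how Pydantic treats `alias`: it is the name of the source attribute to resolve the value from, not a human-readable description, so `Field(1, alias="Short summary of the variable.")` looks for an attribute literally named "Short summary of the variable.", finds none, and falls back to the default. A hedged sketch of the schema with the text moved to `description` (assuming documentation was the intent; the `PersonOut` name and `age: int` type are illustrative):
```python
from ninja import Schema, Field

class PersonOut(Schema):
    # `description` only feeds the OpenAPI docs; `alias` changes which source
    # attribute the value is read from, which is what broke resolution above
    id: int = Field(..., description="Short summary of the variable.")
    name: str = Field(..., description="Second short summary of the variable.")
    age: int = Field(..., description="Third summary of the variable.")
```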
|
closed
|
2022-04-25T14:40:20Z
|
2022-04-25T16:15:07Z
|
https://github.com/vitalik/django-ninja/issues/430
|
[] |
florisgravendeel
| 2
|
ploomber/ploomber
|
jupyter
| 461
|
Document command tester
|
closed
|
2022-01-03T19:02:56Z
|
2022-01-04T04:13:18Z
|
https://github.com/ploomber/ploomber/issues/461
|
[] |
edublancas
| 0
|
|
tqdm/tqdm
|
jupyter
| 855
|
Merge to progressbar2
|
- [x] I have marked all applicable categories:
+ [x] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
[progressbar2](https://github.com/WoLpH/python-progressbar) has a concept of widgets. It allows the library user to set up what is to be shown.
I guess it may make sense to merge the two projects.
@WoLpH
|
open
|
2019-11-30T07:40:11Z
|
2019-12-02T23:03:56Z
|
https://github.com/tqdm/tqdm/issues/855
|
[
"question/docs ‽",
"p4-enhancement-future 🧨",
"submodule ⊂"
] |
KOLANICH
| 11
|
streamlit/streamlit
|
python
| 10,603
|
Rerun fragment from anywhere, not just from within itself
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Related to #8511 and #10045.
Currently the only way to rerun a fragment is calling `st.rerun(scope="fragment")` from within itself. Allowing a fragment rerun to be triggered from anywhere (another fragment or the main app) would unlock new powerful use-cases.
### Why?
When a fragment is dependent on another fragment or on a change on the main app, the only current way to reflect that dependency is to rerun the full application whenever a dependency changes.
### How?
I think adding a key to each fragment would allow the user to build their own custom logic to manage the dependency chain and take full control over when to run a given fragment:
```python
@st.fragment(key="depends_on_b")
def a():
    st.write(st.session_state.input_for_a)

@st.fragment(key="depends_on_main")
def b():
    st.session_state.input_for_a = ...
    if st.button("Should rerun a"):
        st.rerun(scope="fragment", key="depends_on_b")
    if other_condition:
        st.rerun(scope="fragment", key="depends_on_main")
```
This implementation doesn't go the full mile of describing the dependency chain in the fragment's definition and letting Streamlit handle the rerun logic, as suggested in #10045, but it provides more flexibility for the user to rerun a fragment from anywhere and under any conditions that fit their use-case.
### Additional Context
_No response_
|
open
|
2025-03-03T14:55:41Z
|
2025-03-13T04:44:33Z
|
https://github.com/streamlit/streamlit/issues/10603
|
[
"type:enhancement",
"feature:st.fragment"
] |
Abdelgha-4
| 1
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,758
|
InvalidArgumentException
|
This is my code:
```python
import undetected_chromedriver as uc
options = uc.ChromeOptions()
driver = uc.Chrome()
if __name__ == '__main__':
    page = driver.get(url='www.google.com')
    print(page.title)
```
and this is the error message:
```
python app.py
Traceback (most recent call last):
File "C:\Users\user\Desktop\zalando\farfetch\app.py", line 8, in <module>
page = driver.get(url='www.google.com')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\Desktop\zalando\farfetch\venv\Lib\site-packages\undetected_chromedriver\__init__.py", line 665, in get
return super().get(url)
^^^^^^^^^^^^^^^^
File "C:\Users\user\Desktop\zalando\farfetch\venv\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 356, in get
self.execute(Command.GET, {"url": url})
File "C:\Users\user\Desktop\zalando\farfetch\venv\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 347, in execute
self.error_handler.check_response(response)
File "C:\Users\user\Desktop\zalando\farfetch\venv\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 229, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.InvalidArgumentException: Message: invalid argument
(Session info: chrome=122.0.6261.58)
Stacktrace:
GetHandleVerifier [0x00BE8CD3+51395]
(No symbol) [0x00B55F31]
(No symbol) [0x00A0E004]
(No symbol) [0x009FF161]
(No symbol) [0x009FD9DE]
(No symbol) [0x009FE0AB]
(No symbol) [0x00A10258]
(No symbol) [0x00A7AC91]
(No symbol) [0x00A63E8C]
(No symbol) [0x00A7A570]
(No symbol) [0x00A63C26]
(No symbol) [0x00A3C629]
(No symbol) [0x00A3D40D]
GetHandleVerifier [0x00F66493+3711107]
GetHandleVerifier [0x00FA587A+3970154]
GetHandleVerifier [0x00FA0B68+3950424]
GetHandleVerifier [0x00C99CD9+776393]
(No symbol) [0x00B61704]
(No symbol) [0x00B5C5E8]
(No symbol) [0x00B5C799]
(No symbol) [0x00B4DDC0]
BaseThreadInitThunk [0x76CC00F9+25]
RtlGetAppContainerNamedObjectPath [0x77B47BBE+286]
RtlGetAppContainerNamedObjectPath [0x77B47B8E+238]
Exception ignored in: <function Chrome.__del__ at 0x0000027EF89C8540>
Traceback (most recent call last):
File "C:\Users\user\Desktop\zalando\farfetch\venv\Lib\site-packages\undetected_chromedriver\__init__.py", line 843, in __del__
File "C:\Users\user\Desktop\zalando\farfetch\venv\Lib\site-packages\undetected_chromedriver\__init__.py", line 798, in quit
OSError: [WinError 6] The handle is invalid
```
How can I solve this?
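Two things stand out, though this is only a reading of the traceback, not a confirmed answer: Chrome's WebDriver requires a fully qualified URL including the scheme, so `'www.google.com'` triggers `invalid argument`, and `driver.get()` returns `None`, so the title has to be read from the driver itself. A minimal sketch with both adjusted:
```python
import undetected_chromedriver as uc

if __name__ == '__main__':
    driver = uc.Chrome()
    # the scheme is required; a bare host name raises InvalidArgumentException
    driver.get('https://www.google.com')
    # driver.get() returns None, so read the title from the driver
    print(driver.title)
    driver.quit()
```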
|
open
|
2024-02-23T08:04:44Z
|
2024-02-23T08:05:20Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1758
|
[] |
idyweb
| 0
|
mirumee/ariadne-codegen
|
graphql
| 30
|
Handle fragments in provided queries
|
Given example schema:
```gql
schema {
query: Query
}
type Query {
testQuery: ResultType
}
type ResultType {
field1: Int!
field2: String!
field3: Boolean!
}
```
and query with fragment:
```gql
query QueryName {
testQuery {
...TestFragment
}
}
fragment TestFragment on ResultType {
field1
field2
}
```
`graphql-sdk-gen` for now raises `KeyError: 'TestFragment'`, but should generate `query_name.py` file with model definition:
```python
class QueryNameResultType(BaseModel):
    field1: int
    field2: str
```
|
closed
|
2022-11-15T10:22:17Z
|
2022-11-17T07:35:52Z
|
https://github.com/mirumee/ariadne-codegen/issues/30
|
[] |
mat-sop
| 0
|
dnouri/nolearn
|
scikit-learn
| 191
|
Update Version of Lasagne Required in requirements.txt
|
closed
|
2016-01-13T18:19:13Z
|
2016-01-14T21:03:43Z
|
https://github.com/dnouri/nolearn/issues/191
|
[] |
cancan101
| 0
|
|
python-visualization/folium
|
data-visualization
| 1,384
|
search plugin display multiple places
|
**Describe the issue**
I display places where certain historic persons appear at different locations on a Stamen map. They are coded in GeoJSON. Search works fine: it shows the found places in the search combo box, and after I select one value it zooms to that place on the map.
My question:
Is it possible, after a search, to display not only one found place but all places where the name was found?
Cheers,
Uli
|
closed
|
2020-08-30T10:49:38Z
|
2022-11-28T12:36:10Z
|
https://github.com/python-visualization/folium/issues/1384
|
[] |
uli22
| 1
|
ContextLab/hypertools
|
data-visualization
| 51
|
demo scripts: naming and paths
|
All of the demo scripts have names that start with `hypertools_demo-`, which is redundant. I propose removing `hypertools_demo-` from each script name.
Also, some of the paths are relative rather than absolute. For example, the `sample_data` folder is only visible within the `examples` folder, but it is referenced using relative paths. This causes some of the demo functions (e.g. `hypertools_demo-align.py`) to fail. References to `sample_data` could be changed as follows (from within any script that references data in `sample_data`):
`import os`
`datadir = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'sample_data')`
|
closed
|
2017-01-02T23:27:43Z
|
2017-01-03T02:55:08Z
|
https://github.com/ContextLab/hypertools/issues/51
|
[
"bug"
] |
jeremymanning
| 2
|
aiortc/aiortc
|
asyncio
| 1,102
|
Connection(0) ICE failed when using mobile Internet. Connection(0) ICE completed in local network.
|
I use the latest (1.8.0) version of aiortc. The "server" example (aiortc/tree/main/examples/server) works well on a local network and with mobile internet. My project uses django-channels for "offer/answer" message exchange, and a connection establishes perfectly when I use a local network (both computers on the same Wi-Fi network). However, when I try to connect to the server from my smartphone (Android 12, Chrome browser) using a mobile network, the output waits for several seconds and then shows "Connection(0) ICE failed".
I examined the source code of "venv311/Lib/site-packages/aioice/ice.py" and found that the initial list of Pairs (self._check_list) is extended by this call: self.check_incoming(message, addr, protocol) - line 1062 in the request_received() method of the Connection class.
It's a crucial part because it allows the server to know the dynamic IP address of the remote client.
For the "server" example and my local setup, it works well, and the remote peer sends the needed request, triggering request_received() but for mobile internet request_received() is not called.
Could you suggest any ideas about what I might have done wrong? Or maybe you can suggest things which impact this situation.
Thanks in advance
|
closed
|
2024-05-23T12:25:46Z
|
2024-05-29T22:20:35Z
|
https://github.com/aiortc/aiortc/issues/1102
|
[] |
Aroxed
| 0
|
CTFd/CTFd
|
flask
| 2,103
|
backport core theme for compatibility with new plugins
|
While updating our plugins for compatibility with the currently developed theme, it would be beneficial to move certain functions from the plugins to ctfd.js. This works fine with the new theme; however, the current core theme needs to be backported for compatibility.
|
closed
|
2022-04-28T11:36:34Z
|
2022-04-29T04:17:12Z
|
https://github.com/CTFd/CTFd/issues/2103
|
[] |
MilyMilo
| 0
|
AirtestProject/Airtest
|
automation
| 706
|
Redmi 8A: JAVACAP screenshot is incorrect
|
**(Important! Issue category)**
* Image recognition / device control related issue
**Describe the bug**
In every landscape app, the image JAVACAP returns has the correct orientation but the wrong dimensions, as if a sponge lying on its side had been stuffed into a box standing upright.
The image JAVACAP returns is compression-encoded, and this encoded image is already wrong.
**Related screenshots**


**Steps to reproduce**
What is shown in the IDE is already incorrect; simplified, it comes down to the following code:
```python
from airtest.core.android.android import Android
from airtest.core.android.constant import CAP_METHOD,ORI_METHOD
a=Android(cap_method=CAP_METHOD.JAVACAP,ori_method=ORI_METHOD.ADB)
import cv2
cv2.imshow('',a.snapshot())
```
**Expected behavior**
Using MINICAP raises an error; using ADB produces a correct screenshot.
**Python version:**
python3.7
**Airtest version:**
1.1.3
**Device:**
- Model: Redmi 8A
- System: MIUI 11.0.7 (Android 9)
**Other relevant environment information**
Windows 10 1909, Python 3.7, Airtest 1.1.3, adb 1.0.40, Android.sdk_version=28
|
closed
|
2020-03-16T06:50:22Z
|
2020-07-07T08:11:09Z
|
https://github.com/AirtestProject/Airtest/issues/706
|
[
"to be released"
] |
hgjazhgj
| 8
|
mwaskom/seaborn
|
pandas
| 3,591
|
Heatmap does not display all entries
|
Hi all,
I have an issue with my heatmap.
I generated a dataframe with 30k columns and set some of the values (2k of them) to a non-NaN value (some might be double hits, but that is beside the point). The values I fill the dataframe with lie between 0 and 1, to tell the function how to color each sample.
When I plot this, only a small number of hits are displayed, and I wondered why that is.
In my real case even less is shown than in this example (more rows, fewer non-NaNs).
Am I doing something wrong here?
python=3.12; seaborn=0.12; matplotlib=3.8.2; pandas=2.1.4 (Ubuntu=22.04)
```
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import os
import pandas as pd
new_dict = {}
for k in ["a", "b", "c", "d"]:
    v = {str(idx): None for idx in range(30000)}
    rand_ints = [np.random.randint(low=0, high=30000) for i in range(2000)]
    for v_hits in rand_ints:
        v[str(v_hits)] = v_hits/30000
    new_dict[k] = v
df_heatmap_hits = pd.DataFrame(new_dict).transpose()
sns.heatmap(df_heatmap_hits)
plt.show()
```
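Not an authoritative diagnosis, but with 30,000 columns drawn into a figure a few hundred pixels wide, most cells are far smaller than a pixel, so many non-NaN entries simply cannot be resolved in the rendered image. A small check, continuing the snippet above, that the data itself is intact, plus a render at a size where every column maps to at least one pixel (the figure size and dpi are illustrative assumptions):
```python
# confirm the hits are actually present in the dataframe itself
print(df_heatmap_hits.notna().sum().sum(), "non-NaN cells")

# render wide enough that each of the 30,000 columns gets at least one pixel
fig, ax = plt.subplots(figsize=(300, 2), dpi=100)
sns.heatmap(df_heatmap_hits, ax=ax, cbar=False)
fig.savefig("heatmap_full_resolution.png")
```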
|
closed
|
2023-12-11T11:45:03Z
|
2023-12-11T16:34:24Z
|
https://github.com/mwaskom/seaborn/issues/3591
|
[] |
dansteiert
| 5
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 12,441
|
Remove features deprecated in 1.3 and earlier
|
Companion to #12437 that covers the other deprecated features
|
closed
|
2025-03-17T20:01:01Z
|
2025-03-18T16:12:31Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/12441
|
[
"deprecations"
] |
CaselIT
| 1
|
3b1b/manim
|
python
| 1,167
|
[Hiring] Need help creating animation
|
Hello,
I have a simple project which requires a simple animation, and I thought manim would be perfect. Since there is a time constraint, I won't be able to create the animation myself. Would anyone be willing to do it for hire?
|
closed
|
2020-07-14T17:59:48Z
|
2020-08-18T03:44:19Z
|
https://github.com/3b1b/manim/issues/1167
|
[] |
OGALI
| 3
|
ResidentMario/geoplot
|
matplotlib
| 69
|
Failing voronoi example with the new 0.2.2 release
|
The geoplot release seems to have broken the geopandas examples (the voronoi one). I am getting the following error on our readthedocs build:
```
Unexpected failing examples:
/home/docs/checkouts/readthedocs.org/user_builds/geopandas/checkouts/latest/examples/plotting_with_geoplot.py failed leaving traceback:
Traceback (most recent call last):
File "/home/docs/checkouts/readthedocs.org/user_builds/geopandas/checkouts/latest/examples/plotting_with_geoplot.py", line 80, in <module>
linewidth=0)
File "/home/docs/checkouts/readthedocs.org/user_builds/geopandas/conda/latest/lib/python3.6/site-packages/geoplot/geoplot.py", line 2133, in voronoi
geoms = _build_voronoi_polygons(df)
File "/home/docs/checkouts/readthedocs.org/user_builds/geopandas/conda/latest/lib/python3.6/site-packages/geoplot/geoplot.py", line 2687, in _build_voronoi_polygons
ls = np.vstack([np.asarray(infinite_segments), np.asarray(finite_segments)])
File "/home/docs/checkouts/readthedocs.org/user_builds/geopandas/conda/latest/lib/python3.6/site-packages/numpy/core/shape_base.py", line 234, in vstack
return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)
ValueError: all the input arrays must have same number of dimensions
```
|
closed
|
2019-01-07T14:10:45Z
|
2019-03-17T04:24:29Z
|
https://github.com/ResidentMario/geoplot/issues/69
|
[] |
jorisvandenbossche
| 12
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 394
|
Is there any way to change the language?
|
Hey, I have successfully installed it, and I just wanted to ask if there is any way to change the pronunciation language to German?
|
closed
|
2020-07-02T10:16:08Z
|
2020-07-04T15:03:11Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/394
|
[] |
ozanaaslan
| 2
|
WZMIAOMIAO/deep-learning-for-image-processing
|
deep-learning
| 572
|
Question about training MobileNetV3
|
Hello, when I use MobileNetV3 to train on a custom dataset, calling
`net = MobileNetV3(num_classes=16)`
raises the following error:
> TypeError: __init__() missing 2 required positional arguments: 'inverted_residual_setting' and 'last_channel'
How can I solve this? Thanks.
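The constructor signature matches torchvision's `MobileNetV3`, whose `__init__` likewise expects `inverted_residual_setting` and `last_channel`. A minimal sketch, assuming the repo mirrors torchvision's interface, would go through the size-specific factory function rather than the bare class (shown here with torchvision for reference):
```python
# Sketch only: torchvision's factory fills in inverted_residual_setting and
# last_channel for you. If this repo mirrors that interface, use its
# mobilenet_v3_large / mobilenet_v3_small function instead of MobileNetV3 directly.
from torchvision.models import mobilenet_v3_large

net = mobilenet_v3_large(num_classes=16)
```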
|
closed
|
2022-06-13T06:52:35Z
|
2022-06-15T02:38:15Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/572
|
[] |
myfeet2cold
| 1
|
dmlc/gluon-cv
|
computer-vision
| 1,032
|
NMS code question
|
I am new to gluon-cv, and I want to know how NMS is implemented in the SSD model.
Can anyone tell me where the code is? I see there is a TVM conversion tutorial for gluon-cv SSD models that includes NMS and post-processing operations which can be converted to a TVM model directly. I want to know how it works, so that I can use it in a PyTorch detection model to convert to TVM.
Thank you very much.
Best,
Edward
|
closed
|
2019-11-07T09:03:23Z
|
2019-12-19T06:49:49Z
|
https://github.com/dmlc/gluon-cv/issues/1032
|
[] |
Edwardmark
| 3
|
InstaPy/InstaPy
|
automation
| 6,452
|
Ubuntu server // Hide Selenium Extension: error
|
I use Droplet on Digital Ocean _(Ubuntu 20.04 (LTS) x64)_
When I start my quickstart.py, it fails because of "Hide Selenium Extension: error". I haven't found a solution so far; can somebody help me?
> InstaPy Version: 0.6.15
._. ._. ._. ._. ._. ._. ._.
Workspace in use: "/root/InstaPy"
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
INFO [2022-01-03 23:29:27] [xxxxx] Session started!
oooooooooooooooooooooooooooooooooooooooooooooooooooooo
INFO [2022-01-03 23:29:27] [xxxxx] -- Connection Checklist [1/2] (Internet Connection Status)
INFO [2022-01-03 23:29:28] [xxxxx] - Internet Connection Status: ok
INFO [2022-01-03 23:29:28] [xxxxx] - Current IP is "64.227.120.175" and it's from "Germany/DE"
INFO [2022-01-03 23:29:28] [xxxxx] -- Connection Checklist [2/2] (Hide Selenium Extension)
INFO [2022-01-03 23:29:28] [xxxxx] - window.navigator.webdriver response: True
WARNING [2022-01-03 23:29:28] [xxxxx] - Hide Selenium Extension: error
INFO [2022-01-03 23:29:31] [xxxxx] - Cookie file not found, creating cookie...
WARNING [2022-01-03 23:29:45] [xxxxx] Login A/B test detected! Trying another string...
WARNING [2022-01-03 23:29:50] [xxxxx] Could not pass the login A/B test. Trying last string...
......................................................................................................................
CRITICAL [2022-01-03 23:30:24] [xxxxx] Unable to login to Instagram! You will find more information in the logs above.
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
INFO [2022-01-03 23:30:25] [xxxxx] Sessional Live Report:
|> No any statistics to show
[Session lasted 1.04 minutes]
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
INFO [2022-01-03 23:30:25] [xxxxx] Session ended!
ooooooooooooooooooooooooooooooooooooooooooooooooooooo
|
closed
|
2022-01-03T23:37:58Z
|
2022-02-01T14:54:18Z
|
https://github.com/InstaPy/InstaPy/issues/6452
|
[] |
celestinsoum
| 8
|
modin-project/modin
|
data-science
| 7,409
|
BUG: can't use list of tuples of select multiple columns when columns are multiindex
|
### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-main-branch).)
### Reproducible Example
```python
import modin.pandas as mpd

data = {
    "ix": [1, 2, 1, 1, 2, 2],
    "iy": [1, 2, 2, 1, 2, 1],
    "col": ["b", "b", "a", "a", "a", "a"],
    "col_b": ["x", "y", "x", "y", "x", "y"],
    "foo": [7, 1, 0, 1, 2, 2],
    "bar": [9, 4, 0, 2, 0, 0],
}
pivot_modin = mpd.DataFrame(data).pivot_table(
    values=['foo'],
    index=['ix'],
    columns=['col'],
    aggfunc='min',
    margins=False,
    observed=True,
)
pivot_modin.loc[:, [('foo', 'b')]]
```
### Issue Description
this raises
```
KeyError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/pandas/core/indexes/base.py:3805, in Index.get_loc(self, key)
3804 try:
-> 3805 return self._engine.get_loc(casted_key)
3806 except KeyError as err:
File index.pyx:167, in pandas._libs.index.IndexEngine.get_loc()
File index.pyx:196, in pandas._libs.index.IndexEngine.get_loc()
File pandas/_libs/hashtable_class_helper.pxi:7081, in pandas._libs.hashtable.PyObjectHashTable.get_item()
File pandas/_libs/hashtable_class_helper.pxi:7089, in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: ('foo', 'b')
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/pandas/core/indexes/multi.py:3499, in MultiIndex.get_locs(self, seq)
3498 try:
-> 3499 lvl_indexer = self._get_level_indexer(k, level=i, indexer=indexer)
3500 except (InvalidIndexError, TypeError, KeyError) as err:
3501 # InvalidIndexError e.g. non-hashable, fall back to treating
3502 # this as a sequence of labels
3503 # KeyError it can be ambiguous if this is a label or sequence
3504 # of labels
3505 # github.com/pandas-dev/pandas/issues/39424#issuecomment-871626708
File /opt/conda/lib/python3.10/site-packages/pandas/core/indexes/multi.py:3391, in MultiIndex._get_level_indexer(self, key, level, indexer)
3390 else:
-> 3391 idx = self._get_loc_single_level_index(level_index, key)
3393 if level > 0 or self._lexsort_depth == 0:
3394 # Desired level is not sorted
File /opt/conda/lib/python3.10/site-packages/pandas/core/indexes/multi.py:2980, in MultiIndex._get_loc_single_level_index(self, level_index, key)
2979 else:
-> 2980 return level_index.get_loc(key)
File /opt/conda/lib/python3.10/site-packages/pandas/core/indexes/base.py:3812, in Index.get_loc(self, key)
3811 raise InvalidIndexError(key)
-> 3812 raise KeyError(key) from err
3813 except TypeError:
3814 # If we have a listlike key, _check_indexing_error will raise
3815 # InvalidIndexError. Otherwise we fall through and re-raise
3816 # the TypeError.
KeyError: ('foo', 'b')
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/pandas/core/indexes/base.py:3805, in Index.get_loc(self, key)
3804 try:
-> 3805 return self._engine.get_loc(casted_key)
3806 except KeyError as err:
File index.pyx:167, in pandas._libs.index.IndexEngine.get_loc()
File index.pyx:196, in pandas._libs.index.IndexEngine.get_loc()
File pandas/_libs/hashtable_class_helper.pxi:7081, in pandas._libs.hashtable.PyObjectHashTable.get_item()
File pandas/_libs/hashtable_class_helper.pxi:7089, in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'b'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[17], line 1
----> 1 pivot_modin.loc[:, [('foo', 'b')]]
File /opt/conda/lib/python3.10/site-packages/modin/logging/logger_decorator.py:144, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
129 """
130 Compute function with logging if Modin logging is enabled.
131
(...)
141 Any
142 """
143 if LogMode.get() == "disable":
--> 144 return obj(*args, **kwargs)
146 logger = get_logger()
147 logger.log(log_level, start_line)
File /opt/conda/lib/python3.10/site-packages/modin/pandas/indexing.py:666, in _LocIndexer.__getitem__(self, key)
664 except KeyError:
665 pass
--> 666 return self._helper_for__getitem__(
667 key, *self._parse_row_and_column_locators(key)
668 )
File /opt/conda/lib/python3.10/site-packages/modin/logging/logger_decorator.py:144, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
129 """
130 Compute function with logging if Modin logging is enabled.
131
(...)
141 Any
142 """
143 if LogMode.get() == "disable":
--> 144 return obj(*args, **kwargs)
146 logger = get_logger()
147 logger.log(log_level, start_line)
File /opt/conda/lib/python3.10/site-packages/modin/pandas/indexing.py:712, in _LocIndexer._helper_for__getitem__(self, key, row_loc, col_loc, ndim)
709 if isinstance(row_loc, Series) and is_boolean_array(row_loc):
710 return self._handle_boolean_masking(row_loc, col_loc)
--> 712 qc_view = self.qc.take_2d_labels(row_loc, col_loc)
713 result = self._get_pandas_object_from_qc_view(
714 qc_view,
715 row_multiindex_full_lookup,
(...)
719 ndim,
720 )
722 if isinstance(result, Series):
File /opt/conda/lib/python3.10/site-packages/modin/core/storage_formats/pandas/query_compiler_caster.py:157, in apply_argument_cast.<locals>.cast_args(*args, **kwargs)
155 kwargs = cast_nested_args_to_current_qc_type(kwargs, current_qc)
156 args = cast_nested_args_to_current_qc_type(args, current_qc)
--> 157 return obj(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/modin/logging/logger_decorator.py:144, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
129 """
130 Compute function with logging if Modin logging is enabled.
131
(...)
141 Any
142 """
143 if LogMode.get() == "disable":
--> 144 return obj(*args, **kwargs)
146 logger = get_logger()
147 logger.log(log_level, start_line)
File /opt/conda/lib/python3.10/site-packages/modin/core/storage_formats/base/query_compiler.py:4217, in BaseQueryCompiler.take_2d_labels(self, index, columns)
4197 def take_2d_labels(
4198 self,
4199 index,
4200 columns,
4201 ):
4202 """
4203 Take the given labels.
4204
(...)
4215 Subset of this QueryCompiler.
4216 """
-> 4217 row_lookup, col_lookup = self.get_positions_from_labels(index, columns)
4218 if isinstance(row_lookup, slice):
4219 ErrorMessage.catch_bugs_and_request_email(
4220 failure_condition=row_lookup != slice(None),
4221 extra_log=f"Only None-slices are acceptable as a slice argument in masking, got: {row_lookup}",
4222 )
File /opt/conda/lib/python3.10/site-packages/modin/core/storage_formats/pandas/query_compiler_caster.py:157, in apply_argument_cast.<locals>.cast_args(*args, **kwargs)
155 kwargs = cast_nested_args_to_current_qc_type(kwargs, current_qc)
156 args = cast_nested_args_to_current_qc_type(args, current_qc)
--> 157 return obj(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/modin/logging/logger_decorator.py:144, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
129 """
130 Compute function with logging if Modin logging is enabled.
131
(...)
141 Any
142 """
143 if LogMode.get() == "disable":
--> 144 return obj(*args, **kwargs)
146 logger = get_logger()
147 logger.log(log_level, start_line)
File /opt/conda/lib/python3.10/site-packages/modin/core/storage_formats/base/query_compiler.py:4314, in BaseQueryCompiler.get_positions_from_labels(self, row_loc, col_loc)
4312 axis_lookup = self.get_axis(axis).get_indexer_for(axis_loc)
4313 else:
-> 4314 axis_lookup = self.get_axis(axis).get_locs(axis_loc)
4315 elif is_boolean_array(axis_loc):
4316 axis_lookup = boolean_mask_to_numeric(axis_loc)
File /opt/conda/lib/python3.10/site-packages/pandas/core/indexes/multi.py:3513, in MultiIndex.get_locs(self, seq)
3509 raise err
3510 # GH 39424: Ignore not founds
3511 # GH 42351: No longer ignore not founds & enforced in 2.0
3512 # TODO: how to handle IntervalIndex level? (no test cases)
-> 3513 item_indexer = self._get_level_indexer(
3514 x, level=i, indexer=indexer
3515 )
3516 if lvl_indexer is None:
3517 lvl_indexer = _to_bool_indexer(item_indexer)
File /opt/conda/lib/python3.10/site-packages/pandas/core/indexes/multi.py:3391, in MultiIndex._get_level_indexer(self, key, level, indexer)
3388 return slice(i, j, step)
3390 else:
-> 3391 idx = self._get_loc_single_level_index(level_index, key)
3393 if level > 0 or self._lexsort_depth == 0:
3394 # Desired level is not sorted
3395 if isinstance(idx, slice):
3396 # test_get_loc_partial_timestamp_multiindex
File /opt/conda/lib/python3.10/site-packages/pandas/core/indexes/multi.py:2980, in MultiIndex._get_loc_single_level_index(self, level_index, key)
2978 return -1
2979 else:
-> 2980 return level_index.get_loc(key)
File /opt/conda/lib/python3.10/site-packages/pandas/core/indexes/base.py:3812, in Index.get_loc(self, key)
3807 if isinstance(casted_key, slice) or (
3808 isinstance(casted_key, abc.Iterable)
3809 and any(isinstance(x, slice) for x in casted_key)
3810 ):
3811 raise InvalidIndexError(key)
-> 3812 raise KeyError(key) from err
3813 except TypeError:
3814 # If we have a listlike key, _check_indexing_error will raise
3815 # InvalidIndexError. Otherwise we fall through and re-raise
3816 # the TypeError.
3817 self._check_indexing_error(key)
KeyError: 'b'
```
### Expected Behavior
what pandas does
```
ix
1 7
2 1
Name: (foo, b), dtype: int64
```
### Error Logs
<details>
```python-traceback
Replace this line with the error backtrace (if applicable).
```
</details>
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 3e951a63084a9cbfd5e73f6f36653ee12d2a2bfa
python : 3.10.14
python-bits : 64
OS : Linux
OS-release : 5.15.154+
Version : #1 SMP Thu Jun 27 20:43:36 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : POSIX
LANG : C.UTF-8
LOCALE : None.None
Modin dependencies
------------------
modin : 0.32.0
ray : 2.24.0
dask : 2024.9.1
distributed : None
pandas dependencies
-------------------
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 24.0
Cython : 3.0.10
sphinx : None
IPython : 8.21.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.6.1
html5lib : 1.1
hypothesis : None
gcsfs : 2024.6.1
jinja2 : 3.1.4
lxml.etree : 5.3.0
matplotlib : 3.7.5
numba : 0.60.0
numexpr : 2.10.1
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 17.0.0
pyreadstat : None
pytest : 8.3.3
python-calamine : None
pyxlsb : None
s3fs : 2024.6.1
scipy : 1.14.1
sqlalchemy : 2.0.30
tables : 3.10.1
tabulate : 0.9.0
xarray : 2024.9.0
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2024.1
qtpy : None
pyqt5 : None
</details>
|
open
|
2024-11-13T08:15:15Z
|
2024-11-13T11:17:32Z
|
https://github.com/modin-project/modin/issues/7409
|
[
"bug 🦗",
"Triage 🩹"
] |
MarcoGorelli
| 1
|
recommenders-team/recommenders
|
deep-learning
| 1,813
|
[ASK] Recommendation algorithms leveraging user attributes as input
|
Hi Recommenders,
Thank you for the interesting repository. I came across the `recommenders` repository very recently and I am still exploring the different algorithms. I have a particular interest in recommendation algorithms that leverage or take user attributes, i.e., gender, age, occupation, as input.
### Description
<!--- Describe your general ask in detail -->
In my experiments, I am exploring the impact of user attributes as side information for recommender system algorithms, so I am looking for the algorithms that are available in `recommenders`. Could you please point me to all algorithms that take user attributes as input?
Bests,
_Manel._
### Comments
Please note that I have already tried Factorization machine.
|
closed
|
2022-08-18T14:36:28Z
|
2022-10-19T07:58:51Z
|
https://github.com/recommenders-team/recommenders/issues/1813
|
[
"help wanted"
] |
SlokomManel
| 1
|
wandb/wandb
|
data-science
| 9,046
|
[Feature]: Better grouping of runs
|
### Description
Hi,
I would suggest an option to group the runs better.
Let us suppose we have 3 methods with some common (hyper)-parameters such as LR, WD but also they have some specific parameters such as attr_A (values B,C,...), attr_0 (values 1,2,3,...), attr_i (values ii,iii,iv,...)
The problem is to compare the runs not only within the group of specific method, but also globally.
### Suggested Solution
One idea for grouping them, in order to compare not only within the selected method but also holistically, is to group by method, LR, WD, attr_A, attr_0, attr_i, where an attribute that a method does not accept / is independent of is simply null.
Currently this does not work in wandb.
So this feature would be related to https://github.com/wandb/wandb/issues/6460
But another, more general idea would be to allow group in a tree manner.
| LR
| WD
| method
|->attr_A
| |-> attr_AA
|->attr_B
|-> attr_C
etc.
Another way to handle this (a situation where our space contains runs from different methods) is to allow multiple grouping options to be saved, just as in the filter field. This would at least allow quickly comparing runs within the same family of approaches, without regrouping every single time we want to switch between a method with attr_A and a method with attr_0.
However, many users would be happy with just the "null" solution. Many thanks
|
closed
|
2024-12-09T00:31:19Z
|
2024-12-14T00:22:10Z
|
https://github.com/wandb/wandb/issues/9046
|
[
"ty:feature",
"a:app"
] |
matekrk
| 3
|
0xTheProDev/fastapi-clean-example
|
pydantic
| 6
|
question on service-to-service
|
Is it an issue that we expose models directly? For example, in BookService we expose the model in the get method, and we also use models of the Author repository.
This implies that we are allowed to use those interfaces: CRUD, reading all properties, calling other relations, etc.
Is exposing schema objects between domains a better solution?
|
closed
|
2023-02-13T10:09:41Z
|
2023-04-25T07:10:49Z
|
https://github.com/0xTheProDev/fastapi-clean-example/issues/6
|
[] |
Murtagy
| 1
|
kensho-technologies/graphql-compiler
|
graphql
| 698
|
register_macro_edge builds a schema for macro definitions every time its called
|
`register_macro_edge` calls `make_macro_edge_descriptor` which calls `get_and_validate_macro_edge_info` which calls `_validate_ast_with_builtin_graphql_validation` which calls `get_schema_for_macro_edge_definitions` which builds the macro edge definition schema. Building the macro edge definition schema involves copying the entire original schema so we probably want to avoid doing this every time we register a macro edge.
We should probably make MacroRegistry a dataclass and have a `__post_init__` method that creates the schema for macro edge definitions once.
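A rough sketch of that idea, assuming the registry keeps the original schema around; the field names and the injected builder callable are made up for illustration, with the existing `get_schema_for_macro_edge_definitions` helper passed in rather than imported, since its exact location is not specified here.
```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class MacroRegistry:
    schema_without_macros: Any
    # the existing helper mentioned above, injected so this sketch does not
    # guess its import path
    build_definition_schema: Callable[[Any], Any]
    macro_edges: Dict[str, Any] = field(default_factory=dict)
    macro_edge_definition_schema: Any = None

    def __post_init__(self):
        # build the expensive macro-edge definition schema exactly once,
        # instead of copying the full schema on every register_macro_edge call
        self.macro_edge_definition_schema = self.build_definition_schema(
            self.schema_without_macros
        )

    def register_macro_edge(self, name: str, macro_ast: Any) -> None:
        # validation would reuse self.macro_edge_definition_schema here
        self.macro_edges[name] = macro_ast
```
Usage would then look like `MacroRegistry(schema, get_schema_for_macro_edge_definitions)`, with every subsequent registration reusing the prebuilt definition schema.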
|
open
|
2019-12-10T03:42:49Z
|
2019-12-10T03:43:30Z
|
https://github.com/kensho-technologies/graphql-compiler/issues/698
|
[
"enhancement"
] |
pmantica1
| 0
|
explosion/spacy-course
|
jupyter
| 31
|
Chapter 4.4 evaluates correct with no entities
|
<img width="859" alt="Screen Shot 2019-05-30 at 1 12 58 am" src="https://user-images.githubusercontent.com/21038129/58568874-6729f480-8278-11e9-8802-730ec6ea13d2.png">
I put `doc.ents` instead of `entities` but it was still marked correct
|
closed
|
2019-05-29T15:16:23Z
|
2020-04-17T01:32:50Z
|
https://github.com/explosion/spacy-course/issues/31
|
[] |
natashawatkins
| 3
|
kubeflow/katib
|
scikit-learn
| 1,581
|
[chore] Upgrade CRDs to apiextensions.k8s.io/v1
|
/kind feature
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
> The api group apiextensions.k8s.io/v1beta1 is no longer served in k8s 1.22 https://kubernetes.io/docs/reference/using-api/deprecation-guide/#customresourcedefinition-v122
>
> kubeflow APIs need to be upgraded
/cc @alculquicondor
Maybe there is a problem related to https://github.com/kubernetes/apiextensions-apiserver/issues/50
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
|
closed
|
2021-07-19T03:27:14Z
|
2021-08-10T23:31:25Z
|
https://github.com/kubeflow/katib/issues/1581
|
[
"help wanted",
"area/operator",
"kind/feature"
] |
gaocegege
| 2
|
python-visualization/folium
|
data-visualization
| 1,942
|
Add example pattern for pulling JsCode snippets from external javascript files
|
**Is your feature request related to a problem? Please describe.**
I have found myself embedding a lot of javascript code via `JsCode` when using the `Realtime` plugin. This gets messy for a few reasons:
- unit testing the javascript code is not possible
- adding leaflet plugins can result in a huge amount of javascript code embedded inside python
- and the code is just not as modular as it could be
I'm wondering if there's a recommended way to solve this.
**Describe the solution you'd like**
A solution would allow one to reference specific functions from external javascript files within a `JsCode` object.
It's not clear to me what would be better: whether to do this from within `folium` (using the API) or to use some external library to populate the string of javascript code that's passed to `JsCode` based on some javascript file and function. In either case, having an example in the `examples` folder would be great.
**Describe alternatives you've considered**
If nothing is changed about the `folium` API, this could just be done external to `folium`. As in, another library interprets a javascript file, and given a function name, the function's definition is returned as a string. Is this preferable to building the functionality into `folium`? If so, does anybody know of an existing library that can already do this?
**Additional context**
n/a
**Implementation**
n/a
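Not a folium API, just a sketch of the "external library" route discussed above: read a `.js` file, pull out one named top-level function with a naive scan, and hand the resulting string to `JsCode`. The `extract_js_function` helper, the brace-matching approach, and the `handlers.js` path are made up for illustration, and the `JsCode` import location may differ between folium versions.
```python
import re
from pathlib import Path

from folium.utilities import JsCode  # location may vary by folium version


def extract_js_function(path: str, name: str) -> str:
    """Hypothetical helper: return the source of one top-level function
    from a javascript file as a string (very naive brace matching)."""
    source = Path(path).read_text()
    match = re.search(rf"function\s+{re.escape(name)}\s*\([^)]*\)\s*{{", source)
    if match is None:
        raise ValueError(f"{name} not found in {path}")
    depth, end = 0, match.start()
    for i in range(match.start(), len(source)):
        if source[i] == "{":
            depth += 1
        elif source[i] == "}":
            depth -= 1
            if depth == 0:
                end = i + 1
                break
    return source[match.start():end]


# usage (assumes a local handlers.js defining onEachFeature exists):
on_each_feature = JsCode(extract_js_function("handlers.js", "onEachFeature"))
```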
|
closed
|
2024-04-30T17:03:24Z
|
2024-05-19T08:46:33Z
|
https://github.com/python-visualization/folium/issues/1942
|
[] |
thomasegriffith
| 9
|
learning-at-home/hivemind
|
asyncio
| 158
|
Optimize the tests
|
Right now, our tests can take upwards of 10 minutes both in CircleCI and locally, which slows down the development workflow and leads to unnecessary context switches. We should find a way to reduce the time requirements and make sure it stays that way.
* Identify and speed up the slow tests. Main culprits: multiple iterations in tests using random samples, large model sizes for DMoE tests.
* All the tests in our CI pipelines run sequentially, which increases the runtime with the number of tests. It is possible to use [pytest-xdist](https://pypi.org/project/pytest-xdist/), since the default executor has 2 cores.
* Add a global timeout to ensure that future tests don't introduce any regressions
|
closed
|
2021-02-28T15:35:48Z
|
2021-08-03T17:43:27Z
|
https://github.com/learning-at-home/hivemind/issues/158
|
[
"ci"
] |
mryab
| 2
|
mwaskom/seaborn
|
data-science
| 3,608
|
0.13.1: test suite needs `husl` module
|
<details>
<summary>Looks like test suite needs husl module</summary>
```console
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-seaborn-0.13.1-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-seaborn-0.13.1-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra -m 'not network' -p no:cacheprovider
============================= test session starts ==============================
platform linux -- Python 3.8.18, pytest-7.4.4, pluggy-1.3.0
rootdir: /home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1
configfile: pyproject.toml
collected 0 items / 34 errors
==================================== ERRORS ====================================
__________________ ERROR collecting tests/test_algorithms.py ___________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_algorithms.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_algorithms.py:6: in <module>
from seaborn import algorithms as algo
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
___________________ ERROR collecting tests/test_axisgrid.py ____________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_axisgrid.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_axisgrid.py:11: in <module>
from seaborn._base import categorical_order
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
_____________________ ERROR collecting tests/test_base.py ______________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_base.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_base.py:11: in <module>
from seaborn.axisgrid import FacetGrid
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/test_categorical.py __________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_categorical.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_categorical.py:19: in <module>
from seaborn import categorical as cat
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
_________________ ERROR collecting tests/test_distributions.py _________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_distributions.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_distributions.py:12: in <module>
from seaborn import distributions as dist
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/test_docstrings.py ___________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_docstrings.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_docstrings.py:1: in <module>
from seaborn._docstrings import DocstringComponents
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
____________________ ERROR collecting tests/test_matrix.py _____________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_matrix.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_matrix.py:27: in <module>
from seaborn import matrix as mat
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
___________________ ERROR collecting tests/test_miscplot.py ____________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_miscplot.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_miscplot.py:3: in <module>
from seaborn import miscplot as misc
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
____________________ ERROR collecting tests/test_objects.py ____________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_objects.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_objects.py:1: in <module>
import seaborn.objects
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
___________________ ERROR collecting tests/test_palettes.py ____________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_palettes.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_palettes.py:8: in <module>
from seaborn import palettes, utils, rcmod
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
_____________________ ERROR collecting tests/test_rcmod.py _____________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_rcmod.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_rcmod.py:7: in <module>
from seaborn import rcmod, palettes, utils
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/test_regression.py ___________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_regression.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_regression.py:18: in <module>
from seaborn import regression as lm
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/test_relational.py ___________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_relational.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_relational.py:12: in <module>
from seaborn.palettes import color_palette
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/test_statistics.py ___________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_statistics.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_statistics.py:12: in <module>
from seaborn._statistics import (
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
_____________________ ERROR collecting tests/test_utils.py _____________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/test_utils.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_utils.py:23: in <module>
from seaborn import utils, rcmod, scatterplot
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/_core/test_data.py ___________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_core/test_data.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_core/test_data.py:9: in <module>
from seaborn._core.data import PlotData
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
_________________ ERROR collecting tests/_core/test_groupby.py _________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_core/test_groupby.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_core/test_groupby.py:8: in <module>
from seaborn._core.groupby import GroupBy
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/_core/test_moves.py __________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_core/test_moves.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_core/test_moves.py:9: in <module>
from seaborn._core.moves import Dodge, Jitter, Shift, Stack, Norm
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/_core/test_plot.py ___________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_core/test_plot.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_core/test_plot.py:17: in <module>
from seaborn._core.plot import Plot, PlotConfig, Default
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
_______________ ERROR collecting tests/_core/test_properties.py ________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_core/test_properties.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_core/test_properties.py:11: in <module>
from seaborn._core.rules import categorical_order
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/_core/test_rules.py __________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_core/test_rules.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_core/test_rules.py:7: in <module>
from seaborn._core.rules import (
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
_________________ ERROR collecting tests/_core/test_scales.py __________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_core/test_scales.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_core/test_scales.py:11: in <module>
from seaborn._core.plot import Plot
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
________________ ERROR collecting tests/_core/test_subplots.py _________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_core/test_subplots.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_core/test_subplots.py:6: in <module>
from seaborn._core.subplots import Subplots
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/_marks/test_area.py __________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_marks/test_area.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_marks/test_area.py:7: in <module>
from seaborn._core.plot import Plot
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/_marks/test_bar.py ___________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_marks/test_bar.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_marks/test_bar.py:9: in <module>
from seaborn._core.plot import Plot
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/_marks/test_base.py __________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_marks/test_base.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_marks/test_base.py:10: in <module>
from seaborn._marks.base import Mark, Mappable, resolve_color
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/_marks/test_dot.py ___________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_marks/test_dot.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_marks/test_dot.py:6: in <module>
from seaborn.palettes import color_palette
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/_marks/test_line.py __________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_marks/test_line.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_marks/test_line.py:8: in <module>
from seaborn._core.plot import Plot
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
__________________ ERROR collecting tests/_marks/test_text.py __________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_marks/test_text.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_marks/test_text.py:8: in <module>
from seaborn._core.plot import Plot
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
______________ ERROR collecting tests/_stats/test_aggregation.py _______________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_stats/test_aggregation.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_stats/test_aggregation.py:8: in <module>
from seaborn._core.groupby import GroupBy
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
________________ ERROR collecting tests/_stats/test_counting.py ________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_stats/test_counting.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_stats/test_counting.py:8: in <module>
from seaborn._core.groupby import GroupBy
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
________________ ERROR collecting tests/_stats/test_density.py _________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_stats/test_density.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_stats/test_density.py:7: in <module>
from seaborn._core.groupby import GroupBy
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
_________________ ERROR collecting tests/_stats/test_order.py __________________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_stats/test_order.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_stats/test_order.py:8: in <module>
from seaborn._core.groupby import GroupBy
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
_______________ ERROR collecting tests/_stats/test_regression.py _______________
ImportError while importing test module '/home/tkloczko/rpmbuild/BUILD/seaborn-0.13.1/tests/_stats/test_regression.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/_stats/test_regression.py:9: in <module>
from seaborn._core.groupby import GroupBy
seaborn/__init__.py:2: in <module>
from .rcmod import * # noqa: F401,F403
seaborn/rcmod.py:5: in <module>
from . import palettes
seaborn/palettes.py:7: in <module>
import husl
E ModuleNotFoundError: No module named 'husl'
=========================== short test summary info ============================
ERROR tests/test_algorithms.py
ERROR tests/test_axisgrid.py
ERROR tests/test_base.py
ERROR tests/test_categorical.py
ERROR tests/test_distributions.py
ERROR tests/test_docstrings.py
ERROR tests/test_matrix.py
ERROR tests/test_miscplot.py
ERROR tests/test_objects.py
ERROR tests/test_palettes.py
ERROR tests/test_rcmod.py
ERROR tests/test_regression.py
ERROR tests/test_relational.py
ERROR tests/test_statistics.py
ERROR tests/test_utils.py
ERROR tests/_core/test_data.py
ERROR tests/_core/test_groupby.py
ERROR tests/_core/test_moves.py
ERROR tests/_core/test_plot.py
ERROR tests/_core/test_properties.py
ERROR tests/_core/test_rules.py
ERROR tests/_core/test_scales.py
ERROR tests/_core/test_subplots.py
ERROR tests/_marks/test_area.py
ERROR tests/_marks/test_bar.py
ERROR tests/_marks/test_base.py
ERROR tests/_marks/test_dot.py
ERROR tests/_marks/test_line.py
ERROR tests/_marks/test_text.py
ERROR tests/_stats/test_aggregation.py
ERROR tests/_stats/test_counting.py
ERROR tests/_stats/test_density.py
ERROR tests/_stats/test_order.py
ERROR tests/_stats/test_regression.py
!!!!!!!!!!!!!!!!!!! Interrupted: 34 errors during collection !!!!!!!!!!!!!!!!!!!
============================== 34 errors in 5.02s ==============================
```
</details>
The issue is that this module has not been maintained since 2015: https://pypi.org/project/husl/#history
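For anyone hitting this while packaging, a minimal workaround sketch, assuming the source tree still ships the vendored copy at `seaborn/external/husl.py` and that the patched `palettes.py` only needs the top-level name (installing the standalone `husl` package from PyPI would also satisfy the import, but it is unmaintained):

```python
# conftest.py sketch (assumption: seaborn/external/husl.py still exists in the
# source tree). Load the vendored file directly, bypassing seaborn/__init__.py
# (which is what fails), and register it under the top-level name 'husl' that
# the patched palettes.py imports.
import importlib.util
import pathlib
import sys

vendored = pathlib.Path(__file__).parent / "seaborn" / "external" / "husl.py"
spec = importlib.util.spec_from_file_location("husl", vendored)
husl = importlib.util.module_from_spec(spec)
spec.loader.exec_module(husl)
sys.modules["husl"] = husl
```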
|
closed
|
2024-01-01T10:39:07Z
|
2024-01-01T13:11:23Z
|
https://github.com/mwaskom/seaborn/issues/3608
|
[] |
kloczek
| 1
|
donnemartin/data-science-ipython-notebooks
|
deep-learning
| 16
|
Add SAWS: A Supercharged AWS Command Line Interface (CLI) to AWS Section.
|
closed
|
2015-10-04T10:40:50Z
|
2016-05-18T02:09:55Z
|
https://github.com/donnemartin/data-science-ipython-notebooks/issues/16
|
[
"feature-request"
] |
donnemartin
| 1
|
|
deeppavlov/DeepPavlov
|
nlp
| 1,693
|
Github Security Lab Vulnerability Contact
|
Greetings DeepPavlov maintainers,
Github has found a potential vulnerability in DeepPavlov. Please let us know of a point of contact so that we can discuss this privately. We have the [Private Vulnerability Reporting](https://docs.github.com/en/code-security/security-advisories/working-with-repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository) feature if you do not have an established point of contact.
Thanks,
Kevin
|
open
|
2024-08-20T00:54:31Z
|
2024-08-20T00:54:31Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1693
|
[
"bug"
] |
Kwstubbs
| 0
|
serengil/deepface
|
machine-learning
| 845
|
Can't detect faces when calling represent trough API and Facenet detector
|
Here's the image being used:
https://cdn.alboompro.com/5dc181967fac6b000152f3ae_650b3f428274300001d1815a/medium/whatsapp_image_2023-09-18_at_16-41-06.jpeg?v=1
I'm sending it to an API that runs the dockerized image present in the repository on an EKS cluster (no GPU). When I send a request to the represent endpoint with the given image as the img parameter, using Facenet as the face detector (tested with VGG, same error), it gives me the following error:
```
20/Sep/2023:18:47:46 +0000] "POST /represent HTTP/1.1" 200 43 "-" "Insomnia/2023.5.6"
[2023-09-20 18:48:06,321] ERROR in app: Exception on /represent [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/app/routes.py", line 28, in represent
obj = service.represent(
File "/app/service.py", line 6, in represent
embedding_objs = DeepFace.represent(
File "/app/deepface/DeepFace.py", line 663, in represent
img_objs = functions.extract_faces(
File "/app/deepface/commons/functions.py", line 163, in extract_faces
raise ValueError(
ValueError: Face could not be detected. Please confirm that the picture is a face photo or consider to set enforce_detection param to False.
```
Any leads on how to improve face detection? Could it be a hardware limitation (e.g. it can't detect the face because of insufficient memory allocation or something)? Not sure if disabling enforce_detection is an option here.
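For reference, a sketch of what I would try locally with the Python API (parameter names assumed from `DeepFace.represent`), not the HTTP API: switch the detector backend first, and only fall back to `enforce_detection=False` if detection still fails.

```python
# Local sketch (not the HTTP API): retinaface is often more robust than the
# default opencv detector; enforce_detection=False is the last resort and
# returns an embedding computed on the whole image when no face is found.
from deepface import DeepFace

img = "whatsapp_image_2023-09-18_at_16-41-06.jpeg"

try:
    objs = DeepFace.represent(img_path=img, model_name="Facenet",
                              detector_backend="retinaface")
except ValueError:
    objs = DeepFace.represent(img_path=img, model_name="Facenet",
                              enforce_detection=False)

print(len(objs), "embedding(s) returned")
```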
|
closed
|
2023-09-20T18:57:45Z
|
2024-01-11T03:09:50Z
|
https://github.com/serengil/deepface/issues/845
|
[] |
elisoncampos
| 6
|
yinkaisheng/Python-UIAutomation-for-Windows
|
automation
| 198
|
How can I find elements quickly?
|
Hi author, I've just started using this library and find it very handy. I'd like to ask how to find controls quickly.
I captured an element control with the cursor, then stored the control's Name, AutomationId, ClassName, and ControlType properties in a local file. I want to quickly look elements up again from that stored information. I'm currently searching in the way shown below, but it doesn't feel very fast; here is part of the code:
Note: the lookup is done by reading the locally stored element information.
with open(self._locators_path, 'r') as f:
    locators_data = json.loads(f.read())[name]
control = auto.GetRootControl()
def compare(locator):
    def find(child, depth):
        if child.Name == locator['Name'] and child.ClassName == locator['ClassName'] and child.ControlTypeName == locator['ControlType'] and child.AutomationId == locator['AutomationId']:
            print(child.Name)
            return True
        else:
            return False
    return find
l = len(locators_data)
for index in range(l):
    if index == l - 1:
        control = auto.FindControl(control, compare(locators_data[index]), depth)
        if control:
            return WindowsElement(control)
    else:
        control = auto.FindControl(control, compare(locators_data[index]), 1)
return None
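A rough sketch of the kind of faster lookup I am imagining, assuming the library's property-based search (`Control(searchDepth=..., Name=..., ClassName=..., AutomationId=...)` and `Exists()`): walk the stored locator chain level by level and let the library do the matching instead of a custom Compare callback.

```python
# Rough sketch (assumes uiautomation's property-based search API):
# walk the stored locator chain level by level, letting the library match
# Name / ClassName / AutomationId itself instead of a custom Compare callback.
import json
import uiautomation as auto

def find_by_locators(locators_path, name, depth=1):
    with open(locators_path, 'r') as f:
        locators = json.loads(f.read())[name]

    parent = auto.GetRootControl()
    for loc in locators:
        parent = parent.Control(searchDepth=depth,
                                Name=loc['Name'],
                                ClassName=loc['ClassName'],
                                AutomationId=loc['AutomationId'])
        if not parent.Exists(maxSearchSeconds=3):
            return None
    return parent
```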
|
open
|
2022-03-18T12:10:18Z
|
2023-06-23T10:05:42Z
|
https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/198
|
[] |
shidecheng
| 2
|
tensorpack/tensorpack
|
tensorflow
| 788
|
FullyConnected output
|
An issue has to be one of the following:
1. Unexpected Problems / Potential Bugs
For any unexpected problems, __PLEASE ALWAYS INCLUDE__:
1. What you did:
```
xw = FullyConnected('FC',
                    inputs,
                    W_init=tf.random_uniform_initializer(0.0, 1.0),
                    b_init=tf.constant_initializer(0.0),
                    **kwargs)
w = xw.variables.W
```
```
AttributeError: 'Tensor' object has no attribute 'variables'
```
2. Have you made any changes to code? Paste them if any:
```
inputs = symbf.batch_flatten(inputs)
with rename_get_variable({'kernel': 'W', 'bias': 'b'}):
    layer = tf.layers.Dense(
        units=units,
        activation=activation,
        use_bias=use_bias,
        kernel_initializer=kernel_initializer,
        bias_initializer=bias_initializer,
        kernel_regularizer=kernel_regularizer,
        bias_regularizer=bias_regularizer,
        activity_regularizer=activity_regularizer)
    ret = layer.apply(inputs, scope=tf.get_variable_scope())
ret.variables = VariableHolder(W=layer.kernel)
if use_bias:
    ret.variables.b = layer.bias
# return tf.identity(ret, name='output')
return ret
```
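For reference, a sketch of how the kernel could be fetched by name under plain TF 1.x semantics, since the returned Tensor itself has no `.variables` attribute here; the `'FC/W:0'` prefix is an assumption about the variable scope and should be adjusted to match the actual graph.

```python
# TF 1.x sketch: the layer's kernel lives in the graph as a variable named
# '<scope>/FC/W'. 'FC/W:0' below assumes the layer was built at the top level;
# adjust the prefix to your variable scope.
import tensorflow as tf

w_tensor = tf.get_default_graph().get_tensor_by_name('FC/W:0')

# or filter the variables collection:
w_var = [v for v in tf.global_variables() if v.name.endswith('FC/W:0')][0]
```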
|
closed
|
2018-06-09T06:58:12Z
|
2018-06-09T19:09:47Z
|
https://github.com/tensorpack/tensorpack/issues/788
|
[
"duplicate"
] |
kamwoh
| 1
|
tensorflow/datasets
|
numpy
| 5,390
|
[data request] <poker>
|
* Name of dataset: <name>
* URL of dataset: <url>
* License of dataset: <license type>
* Short description of dataset and use case(s): <description>
Folks who would also like to see this dataset in `tensorflow/datasets`, please thumbs-up so the developers can know which requests to prioritize.
And if you'd like to contribute the dataset (thank you!), see our [guide to adding a dataset](https://github.com/tensorflow/datasets/blob/master/docs/add_dataset.md).
|
closed
|
2024-04-26T12:43:40Z
|
2024-04-26T12:44:30Z
|
https://github.com/tensorflow/datasets/issues/5390
|
[
"dataset request"
] |
Loicgrd
| 0
|
pyppeteer/pyppeteer
|
automation
| 416
|
the different with pyppeteer and puppeteer-case by case
|
url:https://open.jd.com/
When requesting this link with puppeteer, the response page content is richer than with pyppeteer. The page content produced by pyppeteer is the same as the raw page source, whereas puppeteer produces the page content after rendering.
You can try it yourself!
version:
chromium-browser-Chromium 90.0.4430.212 Fedora Project
pyppeteer-1.0.2
node-v14.13.1
puppeteer-5.5.0
pyppeteer response content length 19782
puppeteer response content length 35414
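For comparison, a small sketch of how the page could be read after rendering in pyppeteer (`waitUntil` is a standard option of `page.goto`):

```python
# Sketch: wait for network idle before reading the DOM, so pyppeteer returns
# the rendered content rather than the initial page source.
import asyncio
from pyppeteer import launch

async def main():
    browser = await launch()
    page = await browser.newPage()
    await page.goto('https://open.jd.com/', waitUntil='networkidle0')
    html = await page.content()
    print(len(html))
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())
```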
|
closed
|
2022-09-27T04:10:52Z
|
2022-09-27T09:05:33Z
|
https://github.com/pyppeteer/pyppeteer/issues/416
|
[] |
runningabcd
| 0
|
numba/numba
|
numpy
| 9,278
|
Searching `%PREFIX%\Library` for new CTK Windows packages
|
In PR ( https://github.com/numba/numba/pull/7255 ) logic was added to Numba to handle the new-style CTK Conda packages on the `nvidia` channel (and now also on `conda-forge`)
In particular this added a series of `get_nvidia...` functions to search for libraries as part of `numba/cuda/cuda_paths.py`. For example
https://github.com/numba/numba/blob/a4664180ddc91e4c8a37dd06324f4e229a76df91/numba/cuda/cuda_paths.py#L82
<hr>
While it does look like these have some logic for searching for Windows libraries (like looking in `bin` instead of `lib`). It appears they start with `%PREFIX%` instead of `%PREFIX%\Library` (where Conda packages typically [put content on Windows]( https://docs.conda.io/projects/conda-build/en/latest/user-guide/environment-variables.html#environment-variables-set-during-the-build-process ))
Admittedly the `nvidia` packages do use `%PREFIX%` to structure content. However, most Conda packages (including the CTK packages in `conda-forge`) do not do this, using `%PREFIX%\Library` instead. In the future the `nvidia` packages will match this behavior.
Given this, I wonder if we could search `%PREFIX%\Library` for the CTK first and then search `%PREFIX%` second. This would handle both structures seamlessly.
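An illustrative sketch of the lookup order being suggested (not Numba's actual code; paths are assumptions):

```python
# Illustrative only: probe %PREFIX%\Library first (typical conda Windows layout,
# used by conda-forge), then fall back to %PREFIX% (current nvidia-channel layout).
import os
import sys

def candidate_ctk_dirs(subdir):
    prefix = sys.prefix
    yield os.path.join(prefix, 'Library', subdir)
    yield os.path.join(prefix, subdir)

def find_first_existing(subdir='bin'):
    for d in candidate_ctk_dirs(subdir):
        if os.path.isdir(d):
            return d
    return None
```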
|
closed
|
2023-11-11T03:23:01Z
|
2023-12-11T16:10:20Z
|
https://github.com/numba/numba/issues/9278
|
[
"needtriage",
"CUDA",
"bug - build/packaging"
] |
jakirkham
| 4
|
ijl/orjson
|
numpy
| 139
|
Regression in parsing JSON between 3.4.1 and 3.4.2
|
```Python
import orjson
s = '{"cf_status_firefox67": "---", "cf_status_firefox57": "verified"}'
orjson.loads(s)["cf_status_firefox57"]
```
This works in 3.4.1, throws `KeyError: 'cf_status_firefox57'` in 3.4.2.
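A quick cross-check sketch against the stdlib parser, which keeps both keys on the same input:

```python
# Cross-check: json.loads keeps both keys, so the missing key comes from
# orjson's parsing in 3.4.2 rather than from the input string.
import json
import orjson

s = '{"cf_status_firefox67": "---", "cf_status_firefox57": "verified"}'
assert set(json.loads(s)) == {"cf_status_firefox67", "cf_status_firefox57"}
print(sorted(orjson.loads(s).keys()))  # expected to list both cf_status_* keys
```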
|
closed
|
2020-10-30T13:55:18Z
|
2020-10-30T14:11:03Z
|
https://github.com/ijl/orjson/issues/139
|
[] |
marco-c
| 2
|
schemathesis/schemathesis
|
graphql
| 2,504
|
Specifying --hypothesis-seed=# does not recreate tests with the same data
|
### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
When running the Schemathesis CLI tests, we cannot reproduce errors because the data sent to the API changes between runs. When specifying the seed, the first several tests do use the same data, but the runs eventually diverge into completely different data sets unique to each run.
### To Reproduce
1. Run this command twice:
st run --base-url=http://host.docker.internal:3050/api --checks=all --data-generation-method=all --target=all --max-failures=5000 --validate-schema=true --debug-output-file=/mnt/debug.json --show-trace --code-sample-style=curl --cassette-path /mnt/cassette.yaml --fixups=all --stateful=links --sanitize-output=true --contrib-unique-data --contrib-openapi-fill-missing-examples --hypothesis-max-examples=25 --hypothesis-phases=generate,explicit --hypothesis-verbosity=verbose --experimental=openapi-3.1 --generation-allow-x00=false --schemathesis-io-telemetry=false --hypothesis-seed=115111099930844871414873197742186425200 -H Authorization: Bearer TOKEN /mnt/api-specification.json'
2. The cassette files have different lengths, a different number of tests were run, and different data was sent to the API
Please include a minimal API schema causing this issue:
```yaml
{
"openapi": "3.0.1",
"info": {
"title": "System APIs",
"description": "DESCRIPTION REDACTED",
"version": "1.00.00"
},
"servers": [
{
"url": "/api"
}
],
"paths": {
....
```
### Expected behavior
The documentation for the run seed is sparse, but it indicates that the seed should be used to reproduce test results. We are not seeing this behavior.
### Environment
```
- OS: [Windows in Docker]
- Python version: [Using schemathesis/schemathesis:stable]
- Schemathesis version: [Using schemathesis/schemathesis:stable]
- Spec version: [e.g. Open API 3.0.1]
```
### Additional context
Where the two test runs begin to diverge, one test run will send URLs like:
uri: '[http://host.docker.internal:3050/api/PATHREDACTED?0=False&0=False&=true&%C2%94%F1%B3%9C%B3%C2%BC=0']
Whereas the equivalent other test run at the same line in the cassette file will be:
uri: '[http://host.docker.internal:3050/api/PATHREDACTED?ID=0']
|
open
|
2024-10-09T20:26:42Z
|
2024-10-20T01:56:38Z
|
https://github.com/schemathesis/schemathesis/issues/2504
|
[
"Type: Bug",
"Status: Needs Triage"
] |
hydroculator
| 2
|
ansible/awx
|
automation
| 15,071
|
Job output not refreshing automatically in AWX 24.1.0 wit operator 2.14.0
|
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
Hi,
the bug is identical to the one described here: [https://github.com/ansible/awx/issues/9747](https://github.com/ansible/awx/issues/9747)
- I did all the checks. I completely removed the older version and installed everything again from scratch. No luck.
- I did not even install NGINX as an ingress with hostNetwork: true, which I usually do in my home lab. That also does not work.
- I tested it with hostNetwork: true. Does not work.
- I tested it without NGINX ingress. Does not work.
- I tested it with NGINX ingress. Does not work.
- I also tested it with external DNS by modifying the coreDNS. Does not work.
- I also tested it with NGINX Proxy Manager. Does not work.
None of the above solutions work. This functionality simply does not work, and it seems to be related to the websocket.
Operator version 2.14.0
AWX version 24.1.0
Database clean installation: PostgreSQL 15
k3s version: v1.28.8+k3s1
OS: Debian 12 with kernel: 6.1.0-18-amd64
I see identical errors showing that the websocket is not working as expected, as described in the URL I provided.
The additional error:
The error Uncaught DOMException: Failed to execute 'send' on 'WebSocket': Still in CONNECTING state
indicates that the code is trying to send a message through the WebSocket before the connection has been fully established. This happens because the send method is called immediately after creating the WebSocket, but before the connection has completed the handshake with the server.
To resolve this issue, you should ensure that the send operation in the ws.onopen callback occurs only after the WebSocket connection is fully open. The onopen event is designed to handle this scenario, as it's triggered when the WebSocket connection has been established successfully.
Here's how you can modify the code to ensure the send method is called at the right time:
```
let ws;

export default function connectJobSocket({ type, id }, onMessage) {
  ws = new WebSocket(
    `${window.location.protocol === 'http:' ? 'ws:' : 'wss:'}//${
      window.location.host
    }${window.location.pathname}websocket/`
  );

  ws.onopen = () => {
    console.debug('WebSocket connection established.');
    const xrftoken = `; ${document.cookie}`
      .split('; csrftoken=')
      .pop()
      .split(';')
      .shift();
    const eventGroup = `${type}_events`;
    // Ensure the connection is open before sending data
    if (ws.readyState === WebSocket.OPEN) {
      ws.send(
        JSON.stringify({
          xrftoken,
          groups: { jobs: ['summary', 'status_changed'], [eventGroup]: [id] },
        })
      );
    }
  };

  ws.onmessage = (e) => {
    onMessage(JSON.parse(e.data));
  };

  ws.onclose = (e) => {
    console.debug('WebSocket closed:', e.code);
    if (e.code !== 1000) {
      console.debug('Reconnecting WebSocket...');
      setTimeout(() => {
        connectJobSocket({ type, id }, onMessage);
      }, 1000);
    }
  };

  ws.onerror = (err) => {
    console.debug('WebSocket error:', err);
    ws.close();
  };
}

export function closeWebSocket() {
  if (ws) {
    ws.close();
  }
}
```
AWX 23.9.0 with Operator 2.12.2 and PostgreSQL 13 works without any issue. It does not matter whether you use NGINX Proxy Manager, an ingress, or no ingress, or whether you modify the coreDNS; it just works. So I decided not to upgrade to the newest AWX operator, AWX, and PostgreSQL.
### AWX version
24.1.0
### Select the relevant components
- [X] UI
- [X] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
yes
### Ansible version
2.14.3
### Operating system
Debian 12
### Web browser
Chrome
### Steps to reproduce
- Install the newest k3s using this command:
```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik,servicelb" K3S_KUBECONFIG_MODE="644" sh -
```
- Do not modify coreDNS. Do not install ingress.
- Install HELM
```bash
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```
- Install the newest AWX version with the newest AWX operator from scratch using the Ansible playbook below:
```yaml
---
- name: Install AWX
  hosts: localhost
  become: yes
  vars:
    awx_namespace: awx
    project_directory: /var/lib/awx/projects
    storage_size: 2Gi
  tasks:
    - name: Download Kustomize with curl
      ansible.builtin.shell:
        cmd: curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
        creates: /usr/local/bin/kustomize

    - name: Move Kustomize to the /usr/local/bin directory
      ansible.builtin.shell:
        cmd: mv kustomize /usr/local/bin
      args:
        creates: /usr/local/bin/kustomize

    - name: Ensure namespace {{ awx_namespace }} exists
      ansible.builtin.shell:
        cmd: kubectl create namespace {{ awx_namespace }} --dry-run=client -o yaml | kubectl apply -f -

    - name: Generate AWX resource file
      ansible.builtin.copy:
        dest: "./awx.yaml"
        content: |
          ---
          apiVersion: awx.ansible.com/v1beta1
          kind: AWX
          metadata:
            name: awx
          spec:
            service_type: nodeport
            nodeport_port: 30060
            projects_persistence: true
            projects_existing_claim: awx-projects-claim

    - name: Fetch latest release tag of AWX Operator
      ansible.builtin.shell:
        cmd: curl -s https://api.github.com/repos/ansible/awx-operator/releases/latest | grep tag_name | cut -d '"' -f 4
      register: release_tag
      changed_when: false

    - name: Generate PV and PVC resource files
      ansible.builtin.copy:
        dest: "{{ item.dest }}"
        content: "{{ item.content }}"
      loop:
        - dest: "./pv.yml"
          content: |
            ---
            apiVersion: v1
            kind: PersistentVolume
            metadata:
              name: awx-projects-volume
            spec:
              accessModes:
                - ReadWriteOnce
              persistentVolumeReclaimPolicy: Retain
              capacity:
                storage: {{ storage_size }}
              storageClassName: awx-projects-volume
              hostPath:
                path: {{ project_directory }}
        - dest: "./pvc.yml"
          content: |
            ---
            apiVersion: v1
            kind: PersistentVolumeClaim
            metadata:
              name: awx-projects-claim
            spec:
              accessModes:
                - ReadWriteOnce
              volumeMode: Filesystem
              resources:
                requests:
                  storage: {{ storage_size }}
              storageClassName: awx-projects-volume

    - name: Create kustomization.yaml
      ansible.builtin.copy:
        dest: "./kustomization.yaml"
        content: |
          ---
          apiVersion: kustomize.config.k8s.io/v1beta1
          kind: Kustomization
          resources:
            - github.com/ansible/awx-operator/config/default?ref={{ release_tag.stdout }}
            - pv.yml
            - pvc.yml
            - awx.yaml
          images:
            - name: quay.io/ansible/awx-operator
              newTag: {{ release_tag.stdout }}
          namespace: {{ awx_namespace }}

    - name: Apply Kustomize configuration
      ansible.builtin.shell:
        cmd: kustomize build . | kubectl apply -f -
```
Perform a check using http://IP_ADDRESS:30060
### Expected results
Working standard output in jobs view.
### Actual results
The standard output in the jobs view does not work; it does not scroll the results of the running Ansible playbook. The websocket returns the same error as presented in the URL I provided.
### Additional information
```
connectJobSocket.js:17 Uncaught DOMException: Failed to execute 'send' on 'WebSocket': Still in CONNECTING state.
at WebSocket.onopen (https://awx.sysadmin.homes/static/js/main.13ab8549.js:2:3449626)
onopen @ connectJobSocket.js:17
```
```
react-dom.production.min.js:54 Uncaught Error: Minified React error #188; visit https://reactjs.org/docs/error-decoder.html?invariant=188 for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
at react-dom.production.min.js:54:67
at Ze (react-dom.production.min.js:55:217)
at t.findDOMNode (react-dom.production.min.js:295:212)
at t.value (CellMeasurer.js:102:33)
at CellMeasurer.js:48:41
```
See logs from awx-web pod
```bash
10.42.0.1 - - [05/Apr/2024:15:53:26 +0000] "GET /api/v2/jobs/39/job_events/?not__stdout=&order_by=counter&page=1&page_size=50 HTTP/1.1" 200 52 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 21|app: 0|req: 204/1288] 10.42.0.1 () {70 vars in 1359 bytes} [Fri Apr 5 15:53:25 2024] GET /api/v2/jobs/39/job_events/?not__stdout=&order_by=counter&page=1&page_size=50 => generated 52 bytes in 376 msecs (HTTP/1.1 200) 15 headers in 600 bytes (1 switches on core 0)
127.0.0.1:54360 - - [05/Apr/2024:15:45:19] "WSCONNECTING /websocket/" - -
127.0.0.1:54360 - - [05/Apr/2024:15:45:19] "WSCONNECT /websocket/" - -
127.0.0.1:54308 - - [05/Apr/2024:15:45:28] "WSDISCONNECT /websocket/" - -
127.0.0.1:54360 - - [05/Apr/2024:15:45:28] "WSDISCONNECT /websocket/" - -
127.0.0.1:49622 - - [05/Apr/2024:15:45:28] "WSCONNECTING /websocket/" - -
127.0.0.1:49622 - - [05/Apr/2024:15:45:28] "WSCONNECT /websocket/" - -
127.0.0.1:49622 - - [05/Apr/2024:15:45:30] "WSDISCONNECT /websocket/" - -
127.0.0.1:49636 - - [05/Apr/2024:15:45:30] "WSCONNECTING /websocket/" - -
127.0.0.1:49636 - - [05/Apr/2024:15:45:30] "WSCONNECT /websocket/" - -
127.0.0.1:49638 - - [05/Apr/2024:15:45:31] "WSCONNECTING /websocket/" - -
127.0.0.1:49638 - - [05/Apr/2024:15:45:31] "WSDISCONNECT /websocket/" - -
127.0.0.1:49652 - - [05/Apr/2024:15:45:32] "WSCONNECTING /websocket/" - -
127.0.0.1:49652 - - [05/Apr/2024:15:45:32] "WSCONNECT /websocket/" - -
127.0.0.1:49664 - - [05/Apr/2024:15:45:32] "WSCONNECTING /websocket/" - -
127.0.0.1:49664 - - [05/Apr/2024:15:45:32] "WSCONNECT /websocket/" - -
127.0.0.1:49664 - - [05/Apr/2024:15:45:36] "WSDISCONNECT /websocket/" - -
127.0.0.1:49202 - - [05/Apr/2024:15:45:37] "WSCONNECTING /websocket/" - -
127.0.0.1:49202 - - [05/Apr/2024:15:45:37] "WSCONNECT /websocket/" - -
127.0.0.1:49208 - - [05/Apr/2024:15:45:37] "WSCONNECTING /websocket/" - -
127.0.0.1:49208 - - [05/Apr/2024:15:45:37] "WSCONNECT /websocket/" - -
127.0.0.1:49202 - - [05/Apr/2024:15:45:37] "WSDISCONNECT /websocket/" - -
127.0.0.1:49208 - - [05/Apr/2024:15:45:37] "WSDISCONNECT /websocket/" - -
127.0.0.1:49216 - - [05/Apr/2024:15:45:38] "WSCONNECTING /websocket/" - -
127.0.0.1:49216 - - [05/Apr/2024:15:45:38] "WSCONNECT /websocket/" - -
127.0.0.1:49232 - - [05/Apr/2024:15:45:38] "WSCONNECTING /websocket/" - -
127.0.0.1:49232 - - [05/Apr/2024:15:45:38] "WSCONNECT /websocket/" - -
127.0.0.1:49636 - - [05/Apr/2024:15:46:15] "WSDISCONNECT /websocket/" - -
127.0.0.1:49232 - - [05/Apr/2024:15:46:15] "WSDISCONNECT /websocket/" - -
127.0.0.1:57920 - - [05/Apr/2024:15:46:17] "WSCONNECTING /websocket/" - -
127.0.0.1:57920 - - [05/Apr/2024:15:46:17] "WSCONNECT /websocket/" - -
127.0.0.1:57920 - - [05/Apr/2024:15:46:20] "WSDISCONNECT /websocket/" - -
127.0.0.1:57934 - - [05/Apr/2024:15:46:20] "WSCONNECTING /websocket/" - -
127.0.0.1:57934 - - [05/Apr/2024:15:46:20] "WSCONNECT /websocket/" - -
127.0.0.1:57934 - - [05/Apr/2024:15:48:20] "WSDISCONNECT /websocket/" - -
127.0.0.1:39102 - - [05/Apr/2024:15:48:20] "WSCONNECTING /websocket/" - -
127.0.0.1:39102 - - [05/Apr/2024:15:48:20] "WSCONNECT /websocket/" - -
127.0.0.1:39102 - - [05/Apr/2024:15:48:48] "WSDISCONNECT /websocket/" - -
127.0.0.1:47586 - - [05/Apr/2024:15:48:48] "WSCONNECTING /websocket/" - -
127.0.0.1:47586 - - [05/Apr/2024:15:48:48] "WSCONNECT /websocket/" - -
127.0.0.1:47586 - - [05/Apr/2024:15:48:55] "WSDISCONNECT /websocket/" - -
127.0.0.1:47592 - - [05/Apr/2024:15:48:55] "WSCONNECTING /websocket/" - -
127.0.0.1:47592 - - [05/Apr/2024:15:48:55] "WSCONNECT /websocket/" - -
127.0.0.1:45654 - - [05/Apr/2024:15:48:56] "WSCONNECTING /websocket/" - -
127.0.0.1:45654 - - [05/Apr/2024:15:48:56] "WSCONNECT /websocket/" - -
127.0.0.1:45666 - - [05/Apr/2024:15:48:56] "WSCONNECTING /websocket/" - -
127.0.0.1:45654 - - [05/Apr/2024:15:48:56] "WSDISCONNECT /websocket/" - -
127.0.0.1:45666 - - [05/Apr/2024:15:48:56] "WSDISCONNECT /websocket/" - -
127.0.0.1:45672 - - [05/Apr/2024:15:48:57] "WSCONNECTING /websocket/" - -
127.0.0.1:45672 - - [05/Apr/2024:15:48:57] "WSCONNECT /websocket/" - -
127.0.0.1:45686 - - [05/Apr/2024:15:48:57] "WSCONNECTING /websocket/" - -
127.0.0.1:45686 - - [05/Apr/2024:15:48:57] "WSCONNECT /websocket/" - -
127.0.0.1:47592 - - [05/Apr/2024:15:51:04] "WSDISCONNECT /websocket/" - -
127.0.0.1:45686 - - [05/Apr/2024:15:51:04] "WSDISCONNECT /websocket/" - -
127.0.0.1:47042 - - [05/Apr/2024:15:51:05] "WSCONNECTING /websocket/" - -
127.0.0.1:47042 - - [05/Apr/2024:15:51:05] "WSCONNECT /websocket/" - -
127.0.0.1:47042 - - [05/Apr/2024:15:51:07] "WSDISCONNECT /websocket/" - -
127.0.0.1:36972 - - [05/Apr/2024:15:51:07] "WSCONNECTING /websocket/" - -
127.0.0.1:36972 - - [05/Apr/2024:15:51:07] "WSCONNECT /websocket/" - -
127.0.0.1:36972 - - [05/Apr/2024:15:51:16] "WSDISCONNECT /websocket/" - -
127.0.0.1:50854 - - [05/Apr/2024:15:51:16] "WSCONNECTING /websocket/" - -
127.0.0.1:50854 - - [05/Apr/2024:15:51:16] "WSCONNECT /websocket/" - -
127.0.0.1:50854 - - [05/Apr/2024:15:51:21] "WSDISCONNECT /websocket/" - -
127.0.0.1:50860 - - [05/Apr/2024:15:51:22] "WSCONNECTING /websocket/" - -
127.0.0.1:50860 - - [05/Apr/2024:15:51:22] "WSCONNECT /websocket/" - -
127.0.0.1:50862 - - [05/Apr/2024:15:51:23] "WSCONNECTING /websocket/" - -
127.0.0.1:50862 - - [05/Apr/2024:15:51:23] "WSCONNECT /websocket/" - -
127.0.0.1:50864 - - [05/Apr/2024:15:51:23] "WSCONNECTING /websocket/" - -
127.0.0.1:50864 - - [05/Apr/2024:15:51:23] "WSCONNECT /websocket/" - -
127.0.0.1:50862 - - [05/Apr/2024:15:51:23] "WSDISCONNECT /websocket/" - -
127.0.0.1:50864 - - [05/Apr/2024:15:51:23] "WSDISCONNECT /websocket/" - -
127.0.0.1:50872 - - [05/Apr/2024:15:51:24] "WSCONNECTING /websocket/" - -
127.0.0.1:50872 - - [05/Apr/2024:15:51:24] "WSCONNECT /websocket/" - -
127.0.0.1:50878 - - [05/Apr/2024:15:51:24] "WSCONNECTING /websocket/" - -
127.0.0.1:50878 - - [05/Apr/2024:15:51:24] "WSCONNECT /websocket/" - -
127.0.0.1:50860 - - [05/Apr/2024:15:51:35] "WSDISCONNECT /websocket/" - -
127.0.0.1:50878 - - [05/Apr/2024:15:51:35] "WSDISCONNECT /websocket/" - -
127.0.0.1:38166 - - [05/Apr/2024:15:51:35] "WSCONNECTING /websocket/" - -
127.0.0.1:38166 - - [05/Apr/2024:15:51:35] "WSCONNECT /websocket/" - -
127.0.0.1:38166 - - [05/Apr/2024:15:51:38] "WSDISCONNECT /websocket/" - -
127.0.0.1:52776 - - [05/Apr/2024:15:51:38] "WSCONNECTING /websocket/" - -
127.0.0.1:52776 - - [05/Apr/2024:15:51:38] "WSCONNECT /websocket/" - -
127.0.0.1:52776 - - [05/Apr/2024:15:52:45] "WSDISCONNECT /websocket/" - -
127.0.0.1:54772 - - [05/Apr/2024:15:52:45] "WSCONNECTING /websocket/" - -
127.0.0.1:54772 - - [05/Apr/2024:15:52:45] "WSCONNECT /websocket/" - -
127.0.0.1:54772 - - [05/Apr/2024:15:52:51] "WSDISCONNECT /websocket/" - -
127.0.0.1:43496 - - [05/Apr/2024:15:52:52] "WSCONNECTING /websocket/" - -
127.0.0.1:43496 - - [05/Apr/2024:15:52:52] "WSCONNECT /websocket/" - -
127.0.0.1:43502 - - [05/Apr/2024:15:52:53] "WSCONNECTING /websocket/" - -
127.0.0.1:43502 - - [05/Apr/2024:15:52:53] "WSCONNECT /websocket/" - -
127.0.0.1:43518 - - [05/Apr/2024:15:52:53] "WSCONNECTING /websocket/" - -
127.0.0.1:43518 - - [05/Apr/2024:15:52:53] "WSCONNECT /websocket/" - -
127.0.0.1:43502 - - [05/Apr/2024:15:52:53] "WSDISCONNECT /websocket/" - -
127.0.0.1:43518 - - [05/Apr/2024:15:52:53] "WSDISCONNECT /websocket/" - -
127.0.0.1:43522 - - [05/Apr/2024:15:52:54] "WSCONNECTING /websocket/" - -
127.0.0.1:43522 - - [05/Apr/2024:15:52:54] "WSCONNECT /websocket/" - -
127.0.0.1:43538 - - [05/Apr/2024:15:52:54] "WSCONNECTING /websocket/" - -
127.0.0.1:43538 - - [05/Apr/2024:15:52:54] "WSCONNECT /websocket/" - -
127.0.0.1:43538 - - [05/Apr/2024:15:53:19] "WSDISCONNECT /websocket/" - -
127.0.0.1:36982 - - [05/Apr/2024:15:53:21] "WSCONNECTING /websocket/" - -
127.0.0.1:36982 - - [05/Apr/2024:15:53:21] "WSCONNECT /websocket/" - -
127.0.0.1:36992 - - [05/Apr/2024:15:53:21] "WSCONNECTING /websocket/" - -
127.0.0.1:36992 - - [05/Apr/2024:15:53:21] "WSCONNECT /websocket/" - -
127.0.0.1:43496 - - [05/Apr/2024:15:53:22] "WSDISCONNECT /websocket/" - -
127.0.0.1:36992 - - [05/Apr/2024:15:53:22] "WSDISCONNECT /websocket/" - -
127.0.0.1:37002 - - [05/Apr/2024:15:53:22] "WSCONNECTING /websocket/" - -
127.0.0.1:37002 - - [05/Apr/2024:15:53:22] "WSCONNECT /websocket/" - -
127.0.0.1:37002 - - [05/Apr/2024:15:53:24] "WSDISCONNECT /websocket/" - -
127.0.0.1:37008 - - [05/Apr/2024:15:53:24] "WSCONNECTING /websocket/" - -
127.0.0.1:37008 - - [05/Apr/2024:15:53:24] "WSCONNECT /websocket/" - -
127.0.0.1:37022 - - [05/Apr/2024:15:53:25] "WSCONNECTING /websocket/" - -
127.0.0.1:37022 - - [05/Apr/2024:15:53:25] "WSCONNECT /websocket/" - -
127.0.0.1:37022 - - [05/Apr/2024:15:53:25] "WSDISCONNECT /websocket/" - -
10.42.0.1 - - [05/Apr/2024:15:53:29 +0000] "GET /websocket/ HTTP/1.1" 101 207 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:15:53:30 +0000] "GET /websocket/ HTTP/1.1" 101 29 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:15:53:30 +0000] "GET /websocket/ HTTP/1.1" 101 29 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:15:53:31 +0000] "GET /api/v2/jobs/39/job_events/?not__stdout=&order_by=counter&page=1&page_size=50 HTTP/1.1" 200 4049 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 20|app: 0|req: 218/1289] 10.42.0.1 () {70 vars in 1359 bytes} [Fri Apr 5 15:53:30 2024] GET /api/v2/jobs/39/job_events/?not__stdout=&order_by=counter&page=1&page_size=50 => generated 4049 bytes in 731 msecs (HTTP/1.1 200) 15 headers in 602 bytes (1 switches on core 0)
```
logs from google chrome browser console in developer tools
```
Failed to load resource: the server responded with a status of 404 ()
/api/v2/workflow_jobs/62/workflow_nodes/?page=1&page_size=200:1
Failed to load resource: the server responded with a status of 404 ()
/api/v2/workflow_jobs/62/workflow_nodes/?page=1&page_size=200:1
Failed to load resource: the server responded with a status of 404 ()
requestCommon.ts:20
GET https://awx.sysadmin.homes/api/v2/workflow_jobs/62/workflow_nodes/?page=1&page_size=200 404 (Not Found)
He @ requestCommon.ts:20
(anonimowa) @ useGet.tsx:55
t @ index.mjs:653
(anonimowa) @ index.mjs:671
(anonimowa) @ index.mjs:613
(anonimowa) @ index.mjs:245
(anonimowa) @ index.mjs:398
t.isPaused.A.revalidateOnFocus.A.revalidateOnReconnect.t.onErrorRetry.retryCount @ index.mjs:329
Pokaż jeszcze 6 ramek
Pokaż mniej
requestCommon.ts:20
GET https://awx.sysadmin.homes/api/v2/workflow_jobs/62/workflow_nodes/?page=1&page_size=200 404 (Not Found)
```
See logs from k3s cluster pod for awx-web
```bash
10.42.0.1 - - [05/Apr/2024:14:59:39 +0000] "GET /websocket/ HTTP/1.1" 404 3878 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:14:59:50 +0000] "GET /websocket/ HTTP/1.1" 101 322 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:14:59:50 +0000] "OPTIONS /api/v2/unified_jobs/ HTTP/1.1" 200 11658 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 20|app: 0|req: 141/842] 10.42.0.1 () {72 vars in 1293 bytes} [Fri Apr 5 14:59:50 2024] OPTIONS /api/v2/unified_jobs/ => generated 11658 bytes in 194 msecs (HTTP/1.1 200) 14 headers in 580 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:14:59:50 +0000] "OPTIONS /api/v2/inventory_sources/ HTTP/1.1" 200 24146 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 24|app: 0|req: 191/843] 10.42.0.1 () {72 vars in 1303 bytes} [Fri Apr 5 14:59:50 2024] OPTIONS /api/v2/inventory_sources/ => generated 24146 bytes in 410 msecs (HTTP/1.1 200) 14 headers in 586 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:14:59:51 +0000] "GET /api/v2/unified_jobs/?not__launch_type=sync&order_by=-finished&page=1&page_size=20 HTTP/1.1" 200 27264 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 23|app: 0|req: 260/844] 10.42.0.1 () {70 vars in 1369 bytes} [Fri Apr 5 14:59:50 2024] GET /api/v2/unified_jobs/?not__launch_type=sync&order_by=-finished&page=1&page_size=20 => generated 27264 bytes in 585 msecs (HTTP/1.1 200) 14 headers in 580 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:14:59:56 +0000] "GET /api/v2/jobs/9/relaunch/ HTTP/1.1" 200 68 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 23|app: 0|req: 261/845] 10.42.0.1 () {70 vars in 1254 bytes} [Fri Apr 5 14:59:56 2024] GET /api/v2/jobs/9/relaunch/ => generated 68 bytes in 341 msecs (HTTP/1.1 200) 14 headers in 583 bytes (1 switches on core 0)
2024-04-05 14:59:57,986 INFO [e7d81c60ac1a4b08af2363573ae59f3a] awx.analytics.job_lifecycle job-11 created
10.42.0.1 - - [05/Apr/2024:14:59:58 +0000] "POST /api/v2/jobs/9/relaunch/ HTTP/1.1" 201 2937 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 23|app: 0|req: 262/846] 10.42.0.1 () {76 vars in 1374 bytes} [Fri Apr 5 14:59:56 2024] POST /api/v2/jobs/9/relaunch/ => generated 2937 bytes in 2210 msecs (HTTP/1.1 201) 15 headers in 618 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:14:59:58 +0000] "GET /websocket/ HTTP/1.1" 101 350 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:14:59:59 +0000] "GET /api/v2/unified_jobs/?id__in=11¬__launch_type=sync&order_by=-finished&page=1&page_size=20 HTTP/1.1" 200 2884 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 20|app: 0|req: 142/847] 10.42.0.1 () {70 vars in 1389 bytes} [Fri Apr 5 14:59:58 2024] GET /api/v2/unified_jobs/?id__in=11¬__launch_type=sync&order_by=-finished&page=1&page_size=20 => generated 2884 bytes in 577 msecs (HTTP/1.1 200) 14 headers in 579 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:14:59:59 +0000] "GET /api/v2/unified_jobs/?id=11 HTTP/1.1" 200 2884 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 23|app: 0|req: 263/848] 10.42.0.1 () {70 vars in 1259 bytes} [Fri Apr 5 14:59:58 2024] GET /api/v2/unified_jobs/?id=11 => generated 2884 bytes in 688 msecs (HTTP/1.1 200) 14 headers in 579 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:14:59:59 +0000] "GET /api/v2/jobs/11/ HTTP/1.1" 200 3026 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 23|app: 0|req: 264/849] 10.42.0.1 () {70 vars in 1238 bytes} [Fri Apr 5 14:59:59 2024] GET /api/v2/jobs/11/ => generated 3026 bytes in 331 msecs (HTTP/1.1 200) 14 headers in 587 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:00:00 +0000] "OPTIONS /api/v2/jobs/11/job_events/ HTTP/1.1" 200 12315 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 23|app: 0|req: 265/850] 10.42.0.1 () {72 vars in 1305 bytes} [Fri Apr 5 14:59:59 2024] OPTIONS /api/v2/jobs/11/job_events/ => generated 12315 bytes in 356 msecs (HTTP/1.1 200) 15 headers in 603 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 139/851] 10.42.0.1 () {70 vars in 1358 bytes} [Fri Apr 5 15:00:00 2024] GET /api/v2/jobs/11/job_events/?not__stdout=&order_by=counter&page=1&page_size=50 => generated 52 bytes in 351 msecs (HTTP/1.1 200) 15 headers in 600 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:00:00 +0000] "GET /api/v2/jobs/11/job_events/?not__stdout=&order_by=counter&page=1&page_size=50 HTTP/1.1" 200 52 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 23|app: 0|req: 266/852] 10.42.0.1 () {72 vars in 1293 bytes} [Fri Apr 5 15:00:00 2024] OPTIONS /api/v2/unified_jobs/ => generated 11658 bytes in 393 msecs (HTTP/1.1 200) 14 headers in 580 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:00:00 +0000] "OPTIONS /api/v2/unified_jobs/ HTTP/1.1" 200 11658 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:15:00:00 +0000] "OPTIONS /api/v2/inventory_sources/ HTTP/1.1" 200 24146 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 22|app: 0|req: 115/853] 10.42.0.1 () {72 vars in 1303 bytes} [Fri Apr 5 15:00:00 2024] OPTIONS /api/v2/inventory_sources/ => generated 24146 bytes in 646 msecs (HTTP/1.1 200) 14 headers in 586 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:00:01 +0000] "GET /api/v2/unified_jobs/?not__launch_type=sync&order_by=-finished&page=1&page_size=20 HTTP/1.1" 200 32260 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 20|app: 0|req: 143/854] 10.42.0.1 () {70 vars in 1369 bytes} [Fri Apr 5 15:00:00 2024] GET /api/v2/unified_jobs/?not__launch_type=sync&order_by=-finished&page=1&page_size=20 => generated 32260 bytes in 1164 msecs (HTTP/1.1 200) 14 headers in 580 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:00:30 +0000] "GET /websocket/ HTTP/1.1" 101 251 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 23|app: 0|req: 267/855] 10.42.0.1 () {70 vars in 1260 bytes} [Fri Apr 5 15:00:30 2024] GET /api/v2/project_updates/12/ => generated 5601 bytes in 567 msecs (HTTP/1.1 200) 14 headers in 587 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:00:31 +0000] "GET /api/v2/project_updates/12/ HTTP/1.1" 200 5601 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:15:00:31 +0000] "OPTIONS /api/v2/project_updates/12/events/ HTTP/1.1" 200 12382 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 20|app: 0|req: 144/856] 10.42.0.1 () {72 vars in 1319 bytes} [Fri Apr 5 15:00:31 2024] OPTIONS /api/v2/project_updates/12/events/ => generated 12382 bytes in 264 msecs (HTTP/1.1 200) 15 headers in 603 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:00:31 +0000] "GET /api/v2/project_updates/12/events/?not__stdout=&order_by=counter&page=1&page_size=50 HTTP/1.1" 200 32639 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 22|app: 0|req: 116/857] 10.42.0.1 () {70 vars in 1373 bytes} [Fri Apr 5 15:00:31 2024] GET /api/v2/project_updates/12/events/?not__stdout=&order_by=counter&page=1&page_size=50 => generated 32639 bytes in 342 msecs (HTTP/1.1 200) 15 headers in 603 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:00:40 +0000] "GET /websocket/ HTTP/1.1" 101 195 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:15:00:40 +0000] "OPTIONS /api/v2/unified_jobs/ HTTP/1.1" 200 11658 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 21|app: 0|req: 140/858] 10.42.0.1 () {72 vars in 1293 bytes} [Fri Apr 5 15:00:40 2024] OPTIONS /api/v2/unified_jobs/ => generated 11658 bytes in 299 msecs (HTTP/1.1 200) 14 headers in 580 bytes (1 switches on core 0)
[pid: 22|app: 0|req: 117/859] 10.42.0.1 () {72 vars in 1303 bytes} [Fri Apr 5 15:00:40 2024] OPTIONS /api/v2/inventory_sources/ => generated 24146 bytes in 546 msecs (HTTP/1.1 200) 14 headers in 586 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:00:40 +0000] "OPTIONS /api/v2/inventory_sources/ HTTP/1.1" 200 24146 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:15:00:41 +0000] "GET /api/v2/unified_jobs/?not__launch_type=sync&order_by=-finished&page=1&page_size=20 HTTP/1.1" 200 32590 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 23|app: 0|req: 268/860] 10.42.0.1 () {70 vars in 1368 bytes} [Fri Apr 5 15:00:40 2024] GET /api/v2/unified_jobs/?not__launch_type=sync&order_by=-finished&page=1&page_size=20 => generated 32590 bytes in 1479 msecs (HTTP/1.1 200) 14 headers in 580 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:00:42 +0000] "GET /websocket/ HTTP/1.1" 101 237 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:15:00:43 +0000] "GET /api/v2/jobs/11/ HTTP/1.1" 200 6618 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 24|app: 0|req: 192/861] 10.42.0.1 () {70 vars in 1238 bytes} [Fri Apr 5 15:00:42 2024] GET /api/v2/jobs/11/ => generated 6618 bytes in 732 msecs (HTTP/1.1 200) 14 headers in 587 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:00:43 +0000] "OPTIONS /api/v2/jobs/11/job_events/ HTTP/1.1" 200 12315 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 23|app: 0|req: 269/862] 10.42.0.1 () {72 vars in 1305 bytes} [Fri Apr 5 15:00:43 2024] OPTIONS /api/v2/jobs/11/job_events/ => generated 12315 bytes in 232 msecs (HTTP/1.1 200) 15 headers in 603 bytes (1 switches on core 0)
[pid: 23|app: 0|req: 270/863] 10.42.0.1 () {70 vars in 1359 bytes} [Fri Apr 5 15:00:43 2024] GET /api/v2/jobs/11/job_events/?not__stdout=&order_by=counter&page=1&page_size=50 => generated 179025 bytes in 167 msecs (HTTP/1.1 200) 15 headers in 604 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:00:43 +0000] "GET /api/v2/jobs/11/job_events/?not__stdout=&order_by=counter&page=1&page_size=50 HTTP/1.1" 200 179025 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:15:01:28 +0000] "GET /websocket/ HTTP/1.1" 101 235 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:15:01:30 +0000] "GET /websocket/ HTTP/1.1" 101 223 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
10.42.0.1 - - [05/Apr/2024:15:01:30 +0000] "OPTIONS /api/v2/unified_jobs/ HTTP/1.1" 200 11658 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 24|app: 0|req: 193/864] 10.42.0.1 () {72 vars in 1293 bytes} [Fri Apr 5 15:01:30 2024] OPTIONS /api/v2/unified_jobs/ => generated 11658 bytes in 408 msecs (HTTP/1.1 200) 14 headers in 580 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:01:30 +0000] "OPTIONS /api/v2/inventory_sources/ HTTP/1.1" 200 24146 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 23|app: 0|req: 271/865] 10.42.0.1 () {72 vars in 1303 bytes} [Fri Apr 5 15:01:30 2024] OPTIONS /api/v2/inventory_sources/ => generated 24146 bytes in 569 msecs (HTTP/1.1 200) 14 headers in 586 bytes (1 switches on core 0)
10.42.0.1 - - [05/Apr/2024:15:01:31 +0000] "GET /api/v2/unified_jobs/?not__launch_type=sync&order_by=-finished&page=1&page_size=20 HTTP/1.1" 200 32615 "https://awx.sysadmin.homes/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "10.10.0.114"
[pid: 21|app: 0|req: 141/866] 10.42.0.1 () {70 vars in 1369 bytes} [Fri Apr 5 15:01:30 2024] GET /api/v2/unified_jobs/?not__launch_type=sync&order_by=-finished&page=1&page_size=20 => generated 32615 bytes in 775 msecs (HTTP/1.1 200) 14 headers in 580 bytes (1 switches on core 0)
```
The log snippets indicate interactions with various API endpoints and websocket connections under the /websocket/ path. Here's an analysis of the relevant parts and possible issues:
- **Websocket Connection Codes (101):** The HTTP status code 101 in your logs suggests that websocket connections are being initiated and possibly established ("Switching Protocols"). However, there's an initial 404 error for the websocket endpoint, which could indicate a misconfiguration at some point in time.
- **Initial 404 Error:** The first attempt to access /websocket/ resulted in a 404 (Not Found) error. This could imply that either the websocket service wasn't ready at that moment or there was a misconfiguration in the URL or routing.
- **Successful Connections:** Subsequent attempts to connect to the websocket endpoint returned a 101 status, which means the server accepted the request to switch protocols from HTTP to WebSockets.
- **Other API Requests:** The log shows successful HTTP requests (status 200) to various API endpoints, indicating that the regular HTTP API seems to be functioning well.
|
closed
|
2024-04-06T08:05:20Z
|
2024-05-29T15:03:23Z
|
https://github.com/ansible/awx/issues/15071
|
[
"type:bug",
"component:ui",
"needs_triage",
"community",
"component:ui_next"
] |
sysadmin-info
| 6
|
BayesWitnesses/m2cgen
|
scikit-learn
| 582
|
How to use model for jpg photo?
|
Hello. I converted a model into pure python, which I trained to determine the presence or absence of an object.
```
import os
import pickle
import sys
from skimage.io import imread
from skimage.transform import resize
import numpy as np
from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
import m2cgen as m2c
sys.setrecursionlimit(2147483647)
input_dir = '0000/clf-data'
categories = ['empty', 'not_empty']
data = []
labels = []
for category_idx, category in enumerate(categories):
for file in os.listdir(os.path.join(input_dir, category)):
img_path = os.path.join(input_dir, category, file)
img = imread(img_path)
img = resize(img, (15, 15))
data.append(img.flatten())
labels.append(category_idx)
data = np.asarray(data)
labels = np.asarray(labels)
clf = svm.SVC()
x_train, x_test, y_train, y_test = train_test_split(data, labels, test_size=0.2, shuffle=True, stratify=labels)
clf.fit(x_train, y_train)
y_prediction = clf.predict(x_test)
score = accuracy_score(y_prediction, y_test)
print('{}% of samples were correctly classified'.format(str(score * 100)))
#pickle.dump(best_estimator, open('./model.p', 'wb'))
code = m2c.export_to_python(clf)
print(code)
nameimgs = "model.py"
fs = open(nameimgs,"w")
fs.write(code)
fs.close()
```
Please tell me how to make a prediction on a small JPG photo.
I did not find any examples of working with a JPG image on the net. Thank you.
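In case it helps anyone later, here is a minimal sketch of calling the exported model on a single JPG. It assumes the generated model.py exposes m2cgen's usual score(input) function and that the image is preprocessed exactly as during training (resized to 15x15 and flattened); which sign of the decision value maps to which class is worth verifying against clf.predict on a known image:
```python
# Minimal sketch: assumes model.py was generated by m2c.export_to_python and defines score(input).
from skimage.io import imread
from skimage.transform import resize

import model  # the file written by fs.write(code) above

def predict_jpg(path):
    img = imread(path)
    img = resize(img, (15, 15))          # same preprocessing as during training
    features = img.flatten().tolist()    # score() expects a flat list of floats
    decision = model.score(features)     # for a binary SVC this is typically a single decision value
    # assumption: positive decision corresponds to the second category ('not_empty'); verify on known images
    return 'not_empty' if decision > 0 else 'empty'

print(predict_jpg('some_photo.jpg'))
```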
|
open
|
2023-08-31T11:52:19Z
|
2023-08-31T11:52:19Z
|
https://github.com/BayesWitnesses/m2cgen/issues/582
|
[] |
Flashton91
| 0
|
roboflow/supervision
|
computer-vision
| 1,619
|
Bug found in ConfusionMatrix.from_detections
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
Issue found in code when producing a confusion matrix for object detection. It seems like the FN was being added incorrectly to the matrix. Here is the code that was problematic for me. When removing the else condition, I was getting the correct TP value. It seems that num_classes ends up being the same index as detection_classes[matched_detection_idx[j]].
```python
for i, true_class_value in enumerate(true_classes):
    j = matched_true_idx == i
    print('sum(j)', sum(j))
    if matches.shape[0] > 0 and sum(j) == 1:
        result_matrix[
            true_class_value, detection_classes[matched_detection_idx[j]]
        ] += 1  # TP
    else:
        result_matrix[true_class_value, num_classes] += 1  # FN
```
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
open
|
2024-10-24T15:02:23Z
|
2025-01-28T22:37:40Z
|
https://github.com/roboflow/supervision/issues/1619
|
[
"bug"
] |
chiggins2024
| 3
|
AutoGPTQ/AutoGPTQ
|
nlp
| 292
|
[BUG] Exllama seems to crash without a group size
|
**Describe the bug**
Our entire KoboldAI software crashes when AutoGPTQ is trying to load a GPTQ model with act order true but no group size.
First Huggingface shows an error that many keys such as model.layers.12.self_attn.q_proj.g_idx are missing, then the program closes without the real error and our fallbacks do not trigger.
Right now this only happens when I use either my self-compiled 0.4.1 wheel that contains the exllama kernels or the official 0.4.2 wheel. If I use the 0.4.1 wheel without exllama kernels the key errors are still shown but the model works.
Model tested: https://huggingface.co/TheBloke/airochronos-33B-GPTQ
**Hardware details**
Nvidia 3090 (I also have an M40 but its hidden during this test with CUDA_VISIBLE_DEVICES) + AMD Ryzen 1700X
The same issue happens on an A6000 runpod docker instance.
**Software version**
KoboldAI Runtime defaults (Python 3.8, Pytorch 2.0.1, Transformers 4.32, Safetensors 0.3.3, Optimum)
**To Reproduce**
Use a model such as TheBloke/airochronos-33B-GPTQ that is in the GPTQ v1 format with no group size; the entire Python process should crash during the loading phase.
**Expected behavior**
Key errors are / are not shown, the model loads with or without exllama and then generates fine.
|
closed
|
2023-08-27T19:12:36Z
|
2023-10-31T17:31:12Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/292
|
[
"bug"
] |
henk717
| 4
|
deepspeedai/DeepSpeed
|
pytorch
| 5,656
|
[BUG] Running llama2-7b step3 with tensor parallel and HE fails due to incompatible shapes
|
Hi, I get an error when trying to run step3 of llama2-7b with tensor parallel. The error happens in merge_qkv:
```
    return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1569, in _call_impl
result = forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 1038, in forward
outputs = self.model(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1569, in _call_impl
result = forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 925, in forward
layer_outputs = decoder_layer(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1569, in _call_impl
result = forward_call(*args, **kwargs)
File "/software/users/snahir1/DeepSpeed/deepspeed/model_implementations/transformers/ds_transformer.py", line 171, in forward
self.attention(input,
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "/software/users/snahir1/DeepSpeed/deepspeed/ops/transformer/inference/ds_attention.py", line 141, in forward
self._attn_qkvw, self._attn_qkvb = self._merge_qkv()
File "/software/users/snahir1/DeepSpeed/deepspeed/ops/transformer/inference/ds_attention.py", line 118, in _merge_qkv
qvkw[:self.hidden_size_per_partition, :] = self.attn_qw # type: ignore
RuntimeError: The expanded size of the tensor (4096) must match the existing size (0) at non-singleton dimension 1. Target sizes: [512, 4096]. Tensor sizes: [0]
```
The target slice size is [512, 4096], and self.attn_qw size is 0. self.attn_qw is initialized in DeepSpeedSelfAttention as None when initializing the actor model.
maybe the issue is in HybridSplitQKVContainer and specifically in the implementation of set_q_k_v() ?
These specific logs are for batch size = 1 and total batch size = 8. Originally I ran the model with batch_size=4, total_batch_size=32 and it failed earlier, when trying to combine the regular attention mask and the causal attention mask, with:
```
line 841, in _prepare_decoder_attention_mask
    expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
```
in modeling_llama.py.
The regular attention mask is calculated in the data loader with dim[0] equal to the batch size. The causal mask is created later on in the _make_causal_mask function in the transformers library, taking its shape from the "input_ids" argument.
The input_ids tensor is created in the hybrid engine, which uses it as a destination tensor for the dist.all_gather_into_tensor operation (hence its size is batch size * tp size). Therefore dim[0] of the causal mask = total_batch_size.
**To Reproduce**
add --enable_hybrid_engine and --inference_tp_size 8 to training_scripts/llama2/run_llama2_7b.sh
|
open
|
2024-06-13T07:00:06Z
|
2024-06-13T13:21:28Z
|
https://github.com/deepspeedai/DeepSpeed/issues/5656
|
[
"bug",
"deepspeed-chat"
] |
ShellyNR
| 0
|
lux-org/lux
|
jupyter
| 217
|
Improve automatic bin determination for histograms
|
Currently, the formula for histogram binning sometimes results in bins that are very "skinny" and sometimes bins that are very "wide". We need to improve histogram bin width and size determination to ensure more accurate histograms are plotted.
This is especially true for the "Filter" action.
Example:
```python
df = pd.read_csv("https://github.com/lux-org/lux-datasets/blob/master/data/olympic.csv?raw=True")
df.intent=["Height"]
df
```


This needs to be customized for matplotlib and Altair.
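For reference while discussing the fix (this is not Lux's current implementation), one widely used heuristic that adapts bin width to the data's spread is the Freedman-Diaconis rule; a rough sketch:
```python
import numpy as np

def fd_bins(values):
    """Freedman-Diaconis rule: bin width = 2 * IQR / n^(1/3)."""
    values = np.asarray(values)
    q75, q25 = np.percentile(values, [75, 25])
    iqr = q75 - q25
    n = len(values)
    if iqr == 0 or n < 2:
        return 10  # fall back to a fixed bin count for degenerate data
    bin_width = 2 * iqr / (n ** (1 / 3))
    return max(1, int(np.ceil((values.max() - values.min()) / bin_width)))

# e.g. fd_bins(df["Height"].dropna()) would give a data-driven bin count
```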
|
closed
|
2021-01-11T12:46:22Z
|
2021-03-03T22:27:56Z
|
https://github.com/lux-org/lux/issues/217
|
[
"enhancement",
"easy"
] |
dorisjlee
| 1
|
strawberry-graphql/strawberry
|
graphql
| 2,941
|
tests on windows broken?
|
Seemingly the hook tests on windows are broken.
First there are many deprecation warnings, second my code change works for all other tests.
PR on which "tests on windows" fail (there are also some others):
https://github.com/strawberry-graphql/strawberry/pull/2938
|
closed
|
2023-07-12T13:22:52Z
|
2025-03-20T15:56:17Z
|
https://github.com/strawberry-graphql/strawberry/issues/2941
|
[
"bug"
] |
devkral
| 2
|
microsoft/MMdnn
|
tensorflow
| 206
|
TensorFlow2IR
|
Platform (CentOS 6.6):
Python version:2.7.14
Source framework with version (like Tensorflow 1.6.1 with GPU):
Pre-trained model path (own):
Running scripts:
python -m mmdnn.conversion._script.convertToIR -f tensorflow -d resnet101 -n tests/resnet_v1_101.ckpt.meta --dstNodeName muli_predictions

Must the second-to-last layer of the net be "dropout"?
|
closed
|
2018-05-23T00:58:29Z
|
2018-07-05T05:01:05Z
|
https://github.com/microsoft/MMdnn/issues/206
|
[] |
rbzhu
| 4
|
plotly/dash
|
plotly
| 3,069
|
explicitly-set host name overridden by environment variables
|
I am running a dash app on a remote AlmaLinux machine. I connect to the remote machine from a Windows 10 computer via an ssh tunnel: `ssh -L 8050:localhost:8050 AlmaLinuxServerName`. The AlmaLinux computer has restrictive rules preventing connections on all but a few ports coming in on its external IP interfaces, but much less restrictive rules for connections to the localhost interface.
The dash app would start and run on AlmaLinux, but Windows 10 connecting over the SSH tunnel would get ERR_CONNECTION_RESET errors in the web browser, and the AlmaLinux terminal would show "channel 3: open failed: connect failed: Connection refused" every time the web browser tried to refresh.
Furthermore no matter what values I put into the host for the dash run() command, I would get:
netstat -ltun | grep 8050
tcp 0 0 AlmaLinuxURL:8050 0.0.0.0:* LISTEN
Where AlmaLinuxURL is the hostname in my AlmaLinux environment variable. Based on the network configuration, I need the result to be something like:
netstat -ltun | grep 8050
tcp 0 0 localhost:8050 0.0.0.0:* LISTEN
Note, other versions of the localhost name would work (127.0.0.1 etc)
- replace the result of `pip list | grep dash` below
```
dash 2.18.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
Not relevant but tested with,
- OS: Windows 10, AlmaLinux
- Browser: Chrome,Firefox
**Describe the bug**
OS host environment variable overrides explicitly set host with an app.run_server(debug=True, port=8052,host="localhost") call. This is an issue on my compute host because the external IP interface is locked down to outside connections and will only allow connections to localhost via an ssh tunnel. The hostname in my environment variable points to the locked-down external IP interface. So, it doesn't matter what I put into the run_server call. Dash overwrites this with the locked-down interface, and the Flask app is unavailable.
After a lot of debugging, I was able to get my dash app working: I added an os.environ["HOST"] = "127.0.0.1" call to my code before initializing dash. This allowed me to set the Flask host to the one I expected. It didn't resolve the underlying issue that the explicitly set host name is not honored, but it did let the app run.
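For anyone else hitting this, the workaround looks roughly like the following minimal app (hypothetical layout; the key point is setting the HOST environment variable before Dash resolves its defaults):
```python
import os
os.environ["HOST"] = "127.0.0.1"  # must be set before Dash picks up its default host

from dash import Dash, html

app = Dash(__name__)
app.layout = html.Div("hello")

if __name__ == "__main__":
    # the explicit host below is still overridden by the env var, hence the line above
    app.run_server(debug=True, port=8052, host="localhost")
```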
**Expected behavior**
When the host is explicitly set, it should override the default or environment variables. If it is considered desirable to change the host or port via environment variables instead of code, Dash should adopt environmental variables specific to Dash, for example, DASH_HOST and DASH_PORT.
|
closed
|
2024-11-09T20:06:24Z
|
2025-01-23T20:46:59Z
|
https://github.com/plotly/dash/issues/3069
|
[
"bug",
"P2"
] |
bberkeyU
| 1
|
nsidnev/fastapi-realworld-example-app
|
fastapi
| 1
|
Update uvicorn version
|
currently locked uvicorn version is ancient.
|
closed
|
2019-04-11T11:05:48Z
|
2019-04-12T10:01:45Z
|
https://github.com/nsidnev/fastapi-realworld-example-app/issues/1
|
[] |
Euphorbium
| 1
|
newpanjing/simpleui
|
django
| 311
|
Custom action has no effect when it involves a database update
|
**Bug description**
Briefly describe the bug encountered:
I added a custom action in admin.py to update data when a button is clicked, but the action has no effect.
**Steps to reproduce**
1. Add the custom action.
2. The button appears in the UI, but clicking it does nothing.
3. After disabling simpleui in settings and using the native Django admin, the same feature works fine.


**Environment**
1. Operating System: Ubuntu 18
2. Python Version: 3.6.9
3. Django Version: 3.0.8
4. SimpleUI Version: installed from source downloaded from the official site in August this year; version number unknown
**Description**
|
closed
|
2020-09-22T08:23:23Z
|
2020-09-26T06:53:47Z
|
https://github.com/newpanjing/simpleui/issues/311
|
[
"bug"
] |
qiuzige
| 1
|
onnx/onnx
|
machine-learning
| 5,880
|
Missing output shape when using infer_shapes_path on a model with external data?
|
### Discussed in https://github.com/onnx/onnx/discussions/5876
<div type='discussions-op-text'>
<sup>Originally posted by **daveliddell** January 26, 2024</sup>
Hi! I'm having trouble converting code that previously used `infer_shapes ` to use `infer_shapes_path` instead because the model I'm working with is too large for `infer_shapes`. However, on a small test model, it seems that now the output shape isn't getting set. I'm using onnx 1.15 with python 3.10 on ubuntu 22 or 20 (doesn't seem to matter).
Here's the model (with data externalized):
```
def create_model(model_file: Path, data_path: Path):
const = make_node('Constant', [], ['c_shape'], 'const',
value=numpy_helper.from_array(numpy.array([4], dtype=numpy.int64)))
cofshape = make_node('ConstantOfShape', ['c_shape'], ['c_out'], 'cofshape',
value=numpy_helper.from_array(numpy.array([1], dtype=numpy.int64)))
outval = make_tensor_value_info('c_out', TensorProto.INT64, [None])
graph = make_graph([const, cofshape], 'constgraph', [], [outval])
onnx_model = make_model(graph)
convert_model_to_external_data(onnx_model, all_tensors_to_one_file=True,
location=data_path, size_threshold=0, convert_attribute=True)
onnx.save(onnx_model, model_file)
```
Here's the code to load it and infer shapes:
```
onnx.shape_inference.infer_shapes_path(args.input_file, temp_file)
inferred_model = onnx.load(temp_file, load_external_data=False)
onnx.load_external_data_for_model(inferred_model, data_dir)
```
Here's a dump of the output of the resulting model:
```
output {
name: "c_out"
type {
tensor_type {
elem_type: 7
shape {
dim { <-- was expecting "dim_value: 4" here
}
}
}
}
}
```
Some experiments I've tried:
- infer_shapes_path + internal data --> PASS
- infer_shapes_path + external data --> FAIL
- infer_shapes + external data --> PASS
Here's how we were using infer_shapes before:
```
raw_model = onnx.load(file_path)
inferred_model = onnx.shape_inference.infer_shapes(raw_model)
```
</div>
|
closed
|
2024-01-30T01:46:09Z
|
2024-01-31T16:26:38Z
|
https://github.com/onnx/onnx/issues/5880
|
[] |
daveliddell
| 2
|
flairNLP/flair
|
pytorch
| 3,519
|
[Feature]: Allow sentences longer than the token limit for sequence tagger training
|
### Problem statement
Currently, we are not able to train `SequenceTagger` models with tagged `Sentence` objects exceeding the token limit (typically 512). It does seem there is some support for long sentences in embeddings via the `allow_long_sentences` option, but it does not appear that this applies to sequence tagging where the labels still need to be applied at the token level.
We have tried doing this, but if we don't limit the sentences to the token limit, we get an out of memory error. Not sure if this is a bug specifically, or just a lack of support for this feature.
### Solution
Not sure if there is a more ideal way, but one solution for training is to split a sentence into "chunks" that are at most 512 tokens long and apply the labels to these chunks. It is important to avoid splitting chunks across a labeled entity boundary; a sketch of that chunking logic is shown below.
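To make the chunking idea concrete, here is a framework-agnostic sketch (it operates on (token, BIO-tag) pairs rather than Flair objects, so the names are illustrative) that splits a long tagged sentence into chunks of at most max_len tokens without cutting through an entity:
```python
def chunk_tagged_sentence(tokens_with_tags, max_len=512):
    """Split [(token, bio_tag), ...] into chunks of roughly <= max_len tokens,
    never closing a chunk right before an I- tag (i.e. inside an entity).
    A chunk may slightly exceed max_len if an entity spans the boundary."""
    chunks, current = [], []
    for i, (token, tag) in enumerate(tokens_with_tags):
        current.append((token, tag))
        next_tag = tokens_with_tags[i + 1][1] if i + 1 < len(tokens_with_tags) else "O"
        # close the chunk once it is full, but only at a safe boundary
        if len(current) >= max_len and not next_tag.startswith("I-"):
            chunks.append(current)
            current = []
    if current:
        chunks.append(current)
    return chunks
```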
### Additional Context
We have used this in training successfully, so I will be introducing our specific solution in a PR
|
open
|
2024-08-03T01:06:08Z
|
2024-08-23T13:14:02Z
|
https://github.com/flairNLP/flair/issues/3519
|
[
"feature"
] |
MattGPT-ai
| 7
|
jumpserver/jumpserver
|
django
| 14,908
|
[Feature] Can MySQL SSL connections skip server CA certificate verification?
|
### Product Version
v4.6.0
### Version Type
- [x] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online installation (one-command install)
- [ ] Offline package installation
- [x] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source installation
### ⭐️ Feature Description
In the current version, configuring MySQL to use certificates requires specifying the MySQL server CA certificate. Could the --ssl-mode=REQUIRED SSL connection mode be supported?
### Proposed Solution
Support MySQL connections with --ssl-mode=REQUIRED
### Additional Information
_No response_
|
closed
|
2025-02-20T11:06:32Z
|
2025-03-05T12:54:48Z
|
https://github.com/jumpserver/jumpserver/issues/14908
|
[
"⭐️ Feature Request"
] |
wppzxc
| 8
|
dask/dask
|
scikit-learn
| 11,494
|
`align_partitions` creates mismatched partitions.
|
**Describe the issue**:
The `divisions` attribute doesn't match between data frames even after applying `align_partitions` on them.
**Minimal Complete Verifiable Example**:
```python
import numpy as np
from distributed import Client, LocalCluster
from dask import dataframe as dd
from dask.dataframe.multi import align_partitions
def make_ltr(n_samples: int, n_features: int, max_rel: int):
rng = np.random.default_rng(1994)
X = rng.normal(0, 1.0, size=n_samples * n_features).reshape(n_samples, n_features)
y = np.sum(X, axis=1)
y -= y.min()
y = np.round(y / y.max() * max_rel).astype(np.int32)
return X, y
def main(client: Client) -> None:
X, y = make_ltr(n_samples=4096 * 4, n_features=16, max_rel=8)
dfx: dd.DataFrame = dd.from_array(X).repartition(npartitions=16)
dfy: dd.DataFrame = dd.from_dict({"y": y}, npartitions=16)
[dfx, dfy], _, _ = align_partitions(dfx, dfy)
print("dfx:", dfx.divisions, "\ndfy:", dfy.divisions)
if __name__ == "__main__":
with LocalCluster(n_workers=2) as cluster:
with Client(cluster) as client:
main(client)
```
For this particular example, there's an off-by-1 error in the resulting divisions.
```
dfx: (0, 1023, 2047, 3071, 4095, 5119, 6143, 7167, 8191, 9215, 10239, 11263, 12287, 13311, 14335, 15359, 16383)
dfy: (0, 1024, 2048, 3072, 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16383)
```
We need multiple dfs to have the same partition scheme.
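For what it's worth, one possible workaround on the caller's side (not a fix for align_partitions itself) is to force one frame onto the other's known divisions before further processing:
```python
# workaround: repartition dfy onto dfx's divisions so both frames line up
dfy = dfy.repartition(divisions=dfx.divisions)
assert dfx.divisions == dfy.divisions
```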
**Anything else we need to know?**:
**Environment**:
- Dask version: dask, version 2024.10.0
- Python version: Python 3.11.9
- Operating System: Ubuntu 24.04.1
- Install method (conda, pip, source): conda
|
closed
|
2024-11-05T12:26:04Z
|
2024-11-05T13:53:43Z
|
https://github.com/dask/dask/issues/11494
|
[
"needs triage"
] |
trivialfis
| 6
|
FactoryBoy/factory_boy
|
django
| 397
|
factory.Maybe doesn't work with factory.RelatedFactory
|
I'm trying to use a RelatedFactory combined with Maybe, so that it's only created if a condition is fulfilled.
```
class PaymentModeFactory(factory.DjangoModelFactory):
    class Meta:
        model = models.PaymentMode

    kind = enums.MANUAL
    authorization = factory.Maybe(
        factory.LazyAttribute(lambda o: o.kind == enums.WITH_AUTH),
        factory.RelatedFactory(PaymentModeAuthorizationFactory, 'payment_mode'),
    )
```
Whenever the Maybe condition is True (and only if True), the _create() method is given an authorization key in its kwargs. The value is the RelatedFactory we defined.
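One possible alternative while Maybe + RelatedFactory misbehaves (a sketch only; I have not verified that a Trait can carry a RelatedFactory on every factory_boy version) is to express the conditional related object through a Trait:
```python
class PaymentModeFactory(factory.DjangoModelFactory):
    class Meta:
        model = models.PaymentMode

    kind = enums.MANUAL

    class Params:
        # PaymentModeFactory(with_auth=True) sets the kind and creates the related object
        with_auth = factory.Trait(
            kind=enums.WITH_AUTH,
            authorization=factory.RelatedFactory(
                PaymentModeAuthorizationFactory, 'payment_mode'
            ),
        )
```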
|
closed
|
2017-07-31T15:51:11Z
|
2018-01-28T21:56:24Z
|
https://github.com/FactoryBoy/factory_boy/issues/397
|
[] |
tonial
| 3
|
open-mmlab/mmdetection
|
pytorch
| 11,730
|
i cannot handle this issue i need some help
|
File "C:\Users\mistletoe\.conda\envs\d2l\lib\site-packages\mmdet\models\dense_heads\anchor_head.py", line 284, in _get_targets_single
bbox_weights[pos_inds, :] = 1.0
RuntimeError: linearIndex.numel()*sliceSize*nElemBefore == expandedValue.numel() INTERNAL ASSERT FAILED at "C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\cuda\\Indexing.cu":387, please report a bug to PyTorch. number of flattened indices did not match number of elements in the value tensor: 48 vs 12
Error in atexit._run_exitfuncs:
what the hell
|
open
|
2024-05-21T09:19:33Z
|
2025-02-27T14:40:14Z
|
https://github.com/open-mmlab/mmdetection/issues/11730
|
[] |
mistletoe111
| 2
|
ivy-llc/ivy
|
pytorch
| 28,485
|
Fix Frontend Failing Test: jax - tensor.paddle.Tensor.mean
|
To-do List: https://github.com/unifyai/ivy/issues/27496
|
open
|
2024-03-05T15:58:41Z
|
2024-03-09T20:51:36Z
|
https://github.com/ivy-llc/ivy/issues/28485
|
[
"Sub Task"
] |
ZJay07
| 0
|
open-mmlab/mmdetection
|
pytorch
| 11,535
|
Output object detection per class mAP using faster rcnn model, classwise has been modified to True,but doesn't work.
|
I tried to change **classwise: bool = False** to **classwise: bool = True** in \mmdetection\mmdet\evaluation\metrics\coco_metric.py, but it doesn't work. I have searched related issues but could not find the help I expected.
My system is Windows 10. Most likely it's because mmdet 3.1.0 is not being loaded from my edited source, which is why changing classwise in the code has no effect for me. Is there any way to make the config file load my modified coco_metric.py?
my version:
mmdet 3.1.0
torch 1.13.1 + cu116
model:
faster-rcnn_r50_fpn
dataset:
NWPU
config file:
.\mmdetection\configs\faster-rcnn_NWPU2COCO\faster-rcnn_r50_fpn_1x_NWPU.py
```python
fp16 = dict(loss_scale=512.)
model = dict(
type='FasterRCNN',
data_preprocessor=dict(
# copy from _base_/model/faster-rcnn_r50-fpn
type='DetDataPreprocessor',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
bgr_to_rgb=True,
pad_size_divisor=32),
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
norm_eval=True,
style='pytorch',
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
neck=dict(
type='ATTFPN',
        in_channels=[256, 512, 1024, 2048],  # channels of each input stage
        out_channels=256,  # channels of the output feature maps
        num_outs=5),  # number of output feature levels
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type='AnchorGenerator',
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],  # means
            target_stds=[1.0, 1.0, 1.0, 1.0]),  # stds
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),  # whether to use sigmoid for classification; if False, softmax is used
loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
roi_head=dict(
        type='StandardRoIHead',  # RoI head type
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),  # RoI layer: RoIAlign with output size 7
            out_channels=256,  # output channels
            featmap_strides=[4, 8, 16, 32]),  # strides of the feature maps
        bbox_head=dict(
            type='Shared2FCBBoxHead',  # bbox head with two shared fully connected layers
            in_channels=256,  # input channels
            fc_out_channels=1024,  # output channels of the fully connected layers
            roi_feat_size=7,  # RoI feature size
            # num_classes=80, changed to the 10 classes of NWPU
            num_classes=10,  # number of classes for the classifier
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
            # whether to use class-agnostic bbox regression: the bbox branch only predicts whether a box is foreground,
            # and the class is decided later from the classification scores, so one box can correspond to multiple classes
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
# model training and testing settings
train_cfg=dict(
rpn=dict(
assigner=dict(
                type='MaxIoUAssigner',  # positive/negative sample assignment for the RPN
                pos_iou_thr=0.7,  # IoU threshold for positive samples
                neg_iou_thr=0.3,  # IoU threshold for negative samples
                min_pos_iou=0.3,  # minimum IoU for positive samples;
                # if the best IoU of the anchors assigned to a ground truth is below 0.3, ignore all anchors, otherwise keep the anchor with the highest IoU
                match_low_quality=True,
                ignore_iof_thr=-1),  # IoF threshold for ignoring boxes; used when the ground truth contains boxes to ignore, -1 means do not ignore
sampler=dict(
                type='RandomSampler',  # sampler type for positive/negative samples
                num=256,  # number of samples to draw
                pos_fraction=0.5,  # fraction of positive samples
                neg_pos_ub=-1,  # upper bound on the negative/positive ratio; negatives beyond it are ignored, -1 means no limit
                add_gt_as_proposals=False),  # whether to add ground truth boxes as proposals (positives)
            allowed_border=-1,  # how many pixels a bbox may extend beyond the image border; 0 allows none, -1 disables the check
            pos_weight=-1,  # weight of positive samples, -1 keeps the original weight
debug=False),
rpn_proposal=dict(
            nms_pre=2000,  # number of top-scoring proposals kept before NMS
            max_per_img=1000,  # number of top-scoring proposals kept after NMS
            nms=dict(type='soft_nms', iou_threshold=0.7),  # NMS threshold
            min_bbox_size=0),  # minimum bbox size
rcnn=dict(
assigner=dict(
                type='MaxIoUAssigner',  # positive/negative sample assignment for the R-CNN head
                pos_iou_thr=0.5,  # IoU threshold for positive samples
                neg_iou_thr=0.5,  # IoU threshold for negative samples
                min_pos_iou=0.5,
                # minimum IoU for positive samples; if the best IoU of the anchors assigned to a ground truth is below it, ignore all anchors, otherwise keep the anchor with the highest IoU
                match_low_quality=False,
                ignore_iof_thr=-1),  # IoF threshold for ignoring boxes; used when the ground truth contains boxes to ignore, -1 means do not ignore
            sampler=dict(
                type='RandomSampler',  # sampler type for positive/negative samples
                num=512,  # number of samples to draw
                pos_fraction=0.25,  # fraction of positive samples
                neg_pos_ub=-1,  # upper bound on the negative/positive ratio; negatives beyond it are ignored, -1 means no limit
                add_gt_as_proposals=True),  # whether to add ground truth boxes as proposals (positives)
            pos_weight=-1,  # weight of positive samples, -1 keeps the original weight
debug=False)),
# val_cfg=dict(type='ValLoop'),
test_cfg=dict(
rpn=dict(
nms_pre=2000,
max_per_img=1000,
nms=dict(type='soft_nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
# score_thr=0.05,
score_thr=0.001,
nms=dict(type='soft_nms', iou_threshold=0.5),
max_per_img=100))
)
# dataset settings
dataset_type = 'NWPU2COCODataset'  # dataset type
data_root = 'F:/paper_study/remote_sensing_image_processing/code/mmdetection/data/NWPU2COCO/'
backend_args = None
train_pipeline = [
dict(type='LoadImageFromFile', backend_args=backend_args),
dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', scale=(1333, 800), keep_ratio=True),  # could be lowered, e.g. (200, 200)
dict(type='RandomFlip', prob=0.5),
dict(type='PackDetInputs')
]
test_pipeline = [
dict(type='LoadImageFromFile', backend_args=backend_args),
dict(type='Resize', scale=(1333, 800), keep_ratio=True),
# If you don't have a gt annotation, delete the pipeline
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
]
# #修改
# metainfo = {
# 'classes': ('airplane', 'ship', 'storage tank', 'baseball diamond', 'tennis court',
# 'basketball court', 'ground track field', 'harbor', 'bridge', 'vehicle')
# # 'palette': [
# # (220, 20, 60),
# # ]
# }
train_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=True),
batch_sampler=dict(type='AspectRatioBatchSampler'),
dataset=dict(
type=dataset_type,
# metainfo=metainfo,
data_root=data_root,
ann_file=data_root + 'train.json',
data_prefix=dict(img=data_root + 'positive_image_set/'),
# img_prefix=data_root + 'positive_image_set/',
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=train_pipeline,
backend_args=backend_args
))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type=dataset_type,
# metainfo=metainfo,
data_root=data_root,
ann_file=data_root+'test.json',
data_prefix=dict(img=data_root + 'positive_image_set/'),
# img_prefix=data_root+'positive_image_set/',
test_mode=True,
pipeline=test_pipeline,
backend_args=backend_args
))
test_dataloader = val_dataloader
val_evaluator = dict(
    type='CocoMetric',  # COCO metric for evaluating AR, AP and mAP for detection and instance segmentation
    ann_file=data_root + '/test.json',  # path to the annotation file
    metric='bbox',  # metric to compute: `bbox` for detection, `segm` for instance segmentation
format_only=False,
backend_args=backend_args)
test_evaluator = val_evaluator
# training schedule for 1x
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=24, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
# learning rate
param_scheduler = [
# dict(
# type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=500),
dict(
type='MultiStepLR',
begin=0,
end=24,
by_epoch=True,
milestones=[16, 22],
gamma=0.1)
]
# optimizer
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(type='SGD', lr=0.00125, momentum=0.9, weight_decay=0.0001)) #lr=0.02,8GPU
# Default setting for scaling LR automatically
# - `enable` means enable scaling LR automatically
# or not by default.
# - `base_batch_size` = (8 GPUs) x (2 samples per GPU).
auto_scale_lr = dict(enable=False, base_batch_size=1)
load_from = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco' \
'/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth '
#default_runtime.py
default_scope = 'mmdet'
default_hooks = dict(
timer=dict(type='IterTimerHook'),
logger=dict(type='LoggerHook', interval=50),
param_scheduler=dict(type='ParamSchedulerHook'),
checkpoint=dict(type='CheckpointHook', interval=1),
sampler_seed=dict(type='DistSamplerSeedHook'),
visualization=dict(type='DetVisualizationHook'))
env_cfg = dict(
cudnn_benchmark=False,
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
dist_cfg=dict(backend='nccl'),
)
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
type='DetLocalVisualizer', vis_backends=vis_backends, name='visualizer')
log_processor = dict(type='LogProcessor', window_size=50, by_epoch=True)
log_level = 'INFO'
load_from = None
resume = False
```
My val_evaluator and test_evaluator use the same metric: type = CocoMetric.

coco_metric.py modified:

I tried to debug as follows:
1. Because of the custom dataset, I defined .\mmdetection\mmdet\datasets\NWPU2COCO.py and modified classwise = True. It doesn't work, and commenting out this code gives the same unexpected results.

2. I deleted "from .coco_metric import CocoMetric" from F:\paper_study\remote_sensing_image_processing\code\mmdetection\mmdet\evaluation\metrics\__init__.py and ran "python tools/train.py configs/faster-rcnn_NWPU2COCO/faster-rcnn_r50_fpn_1x_NWPU.py",
but it still outputs results, which is weird!

3. I added some print statements to inspect the classwise value, but the classwise value was never printed.

**Environment**
sys.platform: win32
Python: 3.8.17 | packaged by conda-forge | (default, Jun 16 2023, 07:01:59) [MSC v.1929 64 bit (AMD64)]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 2070 SUPER
CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6
NVCC: Cuda compilation tools, release 11.6, V11.6.55
MSVC: 用于 x64 的 Microsoft (R) C/C++ 优化编译器 19.38.33133 版
GCC: n/a
PyTorch: 1.13.1+cu116
PyTorch compiling details: PyTorch built with:
- C++ Version: 199711
- MSVC 192829337
- Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
- OpenMP 2019
- LAPACK is enabled (usually provided by MKL)
- CPU capability usage: AVX2
- CUDA Runtime 11.6
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_7
5,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
- CuDNN 8.3.2 (built against CUDA 11.5)
- Magma 2.5.4
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.6, CUDNN_VERSION=8.3.2, CXX_COMPILER=C:/actions-runner/_work/pytorch/pytorch/builder/windows/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /w /b
igobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/builder/windows/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILE
R_USE_KINETO, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.13.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NC
CL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.14.1+cu116
OpenCV: 4.8.0
MMEngine: 0.8.4
MMDetection: 3.1.0+f78af77
I'm honoured by your advice~!!!!
|
open
|
2024-03-09T07:05:53Z
|
2024-03-10T05:41:51Z
|
https://github.com/open-mmlab/mmdetection/issues/11535
|
[] |
LiuChar1esM
| 0
|
allenai/allennlp
|
pytorch
| 5,558
|
conda-forge support for mac M1 (osx-arm)
|
In #5258, @RoyLiberman [requested](https://github.com/allenai/allennlp/issues/5258#issuecomment-1014461862):
> can you please add support for mac M1 systems on conda-forge?
Since that issue is closed, it's exceedingly easy to overlook this, and so I thought I'd open a new issue.
The migration will be kicked off as soon as https://github.com/conda-forge/conda-forge-pinning-feedstock/pull/2485 is merged, and will then be observable under https://conda-forge.org/status/#armosxaddition. It might be a while until all dependencies get published, but I'll try to keep an eye on the PRs that the migrator opens (you can too! yes, you! you can ping me if you find something has stalled, which is always possible in a volunteer-only organisation).
|
closed
|
2022-02-05T07:10:41Z
|
2022-02-07T05:40:18Z
|
https://github.com/allenai/allennlp/issues/5558
|
[
"Feature request"
] |
h-vetinari
| 1
|
huggingface/datasets
|
pytorch
| 7,018
|
`load_dataset` fails to load dataset saved by `save_to_disk`
|
### Describe the bug
This code fails to load the dataset it just saved:
```python
from datasets import load_dataset
from transformers import AutoTokenizer
MODEL = "google-bert/bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
dataset = load_dataset("yelp_review_full")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
tokenized_datasets.save_to_disk("dataset")
tokenized_datasets = load_dataset("dataset/") # raises
```
It raises `ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})}`.
I believe this bug is caused by the [logic that tries to infer dataset format](https://github.com/huggingface/datasets/blob/9af8dd3de7626183a9a9ec8973cebc672d690400/src/datasets/load.py#L556). It counts the most common file extension. However, a small dataset can fit in a single `.arrow` file and have two JSON metadata files, causing the format to be inferred as JSON:
```shell
$ ls -l dataset/test
-rw-r--r-- 1 sliedes sliedes 191498784 Jul 1 13:55 data-00000-of-00001.arrow
-rw-r--r-- 1 sliedes sliedes 1730 Jul 1 13:55 dataset_info.json
-rw-r--r-- 1 sliedes sliedes 249 Jul 1 13:55 state.json
```
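For context, the usual counterpart of save_to_disk is load_from_disk, which reloads the saved dataset without going through the file-format inference path (a workaround, not a fix for the inference logic itself):
```python
from datasets import load_from_disk

# counterpart of DatasetDict.save_to_disk("dataset")
tokenized_datasets = load_from_disk("dataset")
print(tokenized_datasets)
```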
### Steps to reproduce the bug
Execute the code above.
### Expected behavior
The dataset is loaded successfully.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.9.3-arch1-1-x86_64-with-glibc2.39
- Python version: 3.12.4
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
|
open
|
2024-07-01T12:19:19Z
|
2024-12-03T11:26:17Z
|
https://github.com/huggingface/datasets/issues/7018
|
[] |
sliedes
| 4
|
deepfakes/faceswap
|
machine-learning
| 547
|
intel phi card useful on this project?
|
Can I use an Intel Phi card to make my training faster? Does this project support Intel Phi?
|
closed
|
2018-12-12T06:04:56Z
|
2018-12-12T11:47:03Z
|
https://github.com/deepfakes/faceswap/issues/547
|
[] |
bxjxxyy
| 1
|
RomelTorres/alpha_vantage
|
pandas
| 279
|
Incorrect dates
|
Hello!
alpha_vantage library:

yfinance:

For example:
2020-07-31 | 102.885002 | 106.415001 | 100.824997 | 106.260002
102.885002 * 4 ≈ 411.54
Why is some of your data multiplied by 4?
And how can I select a range of intraday data?
Thank you
|
closed
|
2020-12-22T07:45:01Z
|
2020-12-22T15:13:33Z
|
https://github.com/RomelTorres/alpha_vantage/issues/279
|
[] |
Taram1980
| 1
|
AutoGPTQ/AutoGPTQ
|
nlp
| 9
|
bloom quantize problems
|
error info:
```
model.quantize([example])
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/ansible/online/operator/udf_pod_git/chat-gpt/AutoGPTQ/auto_gptq/modeling/_base.py", line 189, in quantize
    layer(layer_input, **additional_layer_inputs)[0][0].cpu()
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'alibi'
```
Does bloom's positional encoding (alibi) need special handling here?
|
closed
|
2023-04-23T14:27:45Z
|
2023-04-25T03:21:44Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/9
|
[] |
hao-xyz
| 4
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 215
|
Token prediction speed does not match what is claimed in the documentation
|
### Pre-question checklist
- [ ] Since the related dependencies are updated frequently, please make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [ ] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the issues, and found no similar problem or solution
- [ ] Third-party plugin issue: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
### Issue type
Base model:
- [ ] LLaMA
- [x] Alpaca
Issue type:
- [ ] Download issue
- [ ] Model conversion and merging issue
- [ ] Model inference issue (🤗 transformers)
- [ ] Model quantization and deployment issue (llama.cpp, text-generation-webui, LlamaChat)
- [x] Output quality / performance issue
- [ ] Other issue
### Detailed description of the problem
The CPU is an Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz.
I quantized llama-7b and chinese_alpaca_lora-7b with llama.cpp using q4_0. When testing inference speed with 4 threads, the speed is 299 ms/token, as shown below

This does not reach the speed claimed here. What could be the reason?


### Screenshots or logs
$ ./main -m 7B-zh-models/7B/ggml-model-q4_0.bin --color -f prompts/alpaca.txt -ins -c 2048 --temp 0.2 -n 256 --repeat_penalty 1.3 -t 4
main: seed = 1682586033
llama.cpp: loading model from 7B-zh-models/7B/ggml-model-q4_0.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 49954
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 59.11 KB
llama_model_load_internal: mem required = 5896.99 MB (+ 1026.00 MB per state)
llama_init_from_file: kv self size = 1024.00 MB
system_info: n_threads = 4 / 56 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
main: interactive mode on.
Reverse prompt: '### Instruction:
'
sampling: temp = 0.200000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000
generate: n_ctx = 2048, n_batch = 512, n_predict = 256, n_keep = 21
== Running in interactive mode. ==
- Press Ctrl+C to interject at any time.
- Press Return to return control to LLaMa.
- If you want to submit another line, end your input in '\'.
Below is an instruction that describes a task. Write a response that appropriately completes the request.
> 请介绍一下北京
北京是中国的首都,也是一个历史悠久、文化底蕴深厚的城市。它拥有许多著名的景点和历史建筑,如故宫博物院、天安门广场、颐和园等。此外,北京还是中国最大的城市之一,有着丰富的商业活动和现代化的发展趋势。
> ^C
llama_print_timings: load time = 8234.30 ms
llama_print_timings: sample time = 97.49 ms / 60 runs ( 1.62 ms per run)
llama_print_timings: prompt eval time = 12867.45 ms / 43 tokens ( 299.24 ms per token)
llama_print_timings: eval time = 23143.38 ms / 60 runs ( 385.72 ms per run)
llama_print_timings: total time = 56133.66 ms
|
closed
|
2023-04-27T09:11:48Z
|
2023-05-12T01:49:25Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/215
|
[] |
YulongXia
| 6
|
robotframework/robotframework
|
automation
| 4,762
|
Unresolved library: CourseService.modules.courseServiceAPIHelper. Error generating libspec: Importing library 'CourseService.modules.courseServiceAPIHelper' failed: ModuleNotFoundError: No module named 'CourseService' Consider adding the needed paths to the "robot.pythonpath" setting and calling the "Robot Framework: Clear caches and restart" action.
|
Hi I am getting this error in my robot class:
Unresolved library: CourseService.modules.courseServiceAPIHelper. Error generating libspec: Importing library 'CourseService.modules.courseServiceAPIHelper' failed: ModuleNotFoundError: No module named 'CourseService' Consider adding the needed paths to the "robot.pythonpath" setting and calling the "Robot Framework: Clear caches and restart" action.
|
closed
|
2023-05-11T07:11:22Z
|
2023-05-31T22:33:34Z
|
https://github.com/robotframework/robotframework/issues/4762
|
[] |
vinayakvardhan
| 1
|
ray-project/ray
|
python
| 51,636
|
[<Ray component: Core|RLlib|etc...>] No action space and observation space in MASAC algorithm in the tune_example
|
### What happened + What you expected to happen
In tune_examples, when I use the multi-agent SAC algorithm to tune the pendulum environment, the observation space and action space are missing when I print them
### Versions / Dependencies
Ray 2.42.1, Python 3.12.9, CentOS 7
### Reproduction script
```python
from torch import nn
from ray.rllib.algorithms.sac import SACConfig
from ray.rllib.core.rl_module.default_model_config import DefaultModelConfig
from ray.rllib.examples.envs.classes.multi_agent import MultiAgentPendulum
from ray.rllib.utils.metrics import (
    ENV_RUNNER_RESULTS,
    EPISODE_RETURN_MEAN,
    NUM_ENV_STEPS_SAMPLED_LIFETIME,
)
from ray.rllib.utils.test_utils import add_rllib_example_script_args
from ray.tune.registry import register_env

parser = add_rllib_example_script_args(
    default_timesteps=500000,
)
parser.set_defaults(
    enable_new_api_stack=True,
    num_agents=2,
)
# Use `parser` to add your own custom command line options to this script
# and (if needed) use their values to set up `config` below.
args = parser.parse_args()

register_env("multi_agent_pendulum", lambda cfg: MultiAgentPendulum(config=cfg))

config = (
    SACConfig()
    .environment("multi_agent_pendulum", env_config={"num_agents": args.num_agents})
    .training(
        initial_alpha=1.001,
        # Use a smaller learning rate for the policy.
        actor_lr=2e-4 * (args.num_learners or 1) ** 0.5,
        critic_lr=8e-4 * (args.num_learners or 1) ** 0.5,
        alpha_lr=9e-4 * (args.num_learners or 1) ** 0.5,
        lr=None,
        target_entropy="auto",
        n_step=(2, 5),
        tau=0.005,
        train_batch_size_per_learner=256,
        target_network_update_freq=1,
        replay_buffer_config={
            "type": "MultiAgentPrioritizedEpisodeReplayBuffer",
            "capacity": 100000,
            "alpha": 1.0,
            "beta": 0.0,
        },
        num_steps_sampled_before_learning_starts=256,
    )
    .rl_module(
        model_config=DefaultModelConfig(
            fcnet_hiddens=[256, 256],
            fcnet_activation="relu",
            fcnet_kernel_initializer=nn.init.xavier_uniform_,
            head_fcnet_hiddens=[],
            head_fcnet_activation=None,
            head_fcnet_kernel_initializer=nn.init.orthogonal_,
            head_fcnet_kernel_initializer_kwargs={"gain": 0.01},
        ),
    )
    .reporting(
        metrics_num_episodes_for_smoothing=5,
    )
)

if args.num_agents > 0:
    config.multi_agent(
        policy_mapping_fn=lambda aid, *arg, **kw: f"p{aid}",
        policies={f"p{i}" for i in range(args.num_agents)},
    )

stop = {
    NUM_ENV_STEPS_SAMPLED_LIFETIME: args.stop_timesteps,
    # `episode_return_mean` is the sum of all agents/policies' returns.
    f"{ENV_RUNNER_RESULTS}/{EPISODE_RETURN_MEAN}": -450.0 * args.num_agents,
}

if __name__ == "__main__":
    assert (
        args.num_agents > 0
    ), "The `--num-agents` arg must be > 0 for this script to work."

    from ray.rllib.utils.test_utils import run_rllib_example_script_experiment

    env = MultiAgentPendulum(config={"num_agents": args.num_agents})
    print("observation_space:", env.observation_space)
    print("action space is sssss:", env.action_space)

    run_rllib_example_script_experiment(config, args, stop=stop)
```
### Issue Severity
None
|
open
|
2025-03-24T09:11:44Z
|
2025-03-24T09:11:44Z
|
https://github.com/ray-project/ray/issues/51636
|
[
"bug",
"triage"
] |
HaoningJiang-space
| 0
|
ultralytics/yolov5
|
deep-learning
| 12,865
|
Assertion Error : Image not found
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I am currently trying to train a YOLOv6 model in Google Colab and I am getting an error.
I referenced the question below.
(https://github.com/ultralytics/yolov5/issues/1494)
Is it because I imported the dataset from my Google Drive?
And if that's the case, does the comment below refer to my situation? (glenn-jocher commented on Nov 27, 2020:
"Oh, then you simply have network issues. You should always train with local data, never with remote buckets/drives.")


### Additional
```
Traceback (most recent call last):
  File "/content/YOLOv6/tools/train.py", line 142, in .
    main(args)
  File "/content/YOLOv6/tools/train.py", line 127, in main
    trainer = Trainer(args, cfg, device)
  File "/content/YOLOv6/yolov6/core/engine.py", line 94, in init
    self.train_loader, self.val_loader = self.get_data_loader(self.args, self.cfg, self.data_dict)
  File "/content/YOLOv6/yolov6/core/engine.py", line 406, in get_data_loader
    train_loader = create_dataloader(train_path, args.img_size, args.batch_size // args.world_size, grid_size,
  File "/content/YOLOv6/yolov6/data/data_load.py", line 46, in create_dataloader
    dataset = TrainValDataset(
  File "/content/YOLOv6/yolov6/data/seg_datasets.py", line 86, in init
    self.img_paths, self.labels = self.get_imgs_labels(self.img_dir)
  File "/content/YOLOv6/yolov6/data/seg_datasets.py", line 329, in get_imgs_labels
    assert img_paths, f"No images found in {img_dir}."
AssertionError: No images found in /content/drive/MyDrive/[DILab_data]/Computer_Vision/Fire_detection/FST1/FST1/train/images.
```
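In case it helps, a common workaround in Colab is to copy the dataset from the mounted Drive to local storage first and train from the local copy; the paths below are only illustrative:
```python
import shutil

# copy the dataset from the mounted Drive to fast local storage (example paths)
src = "/content/drive/MyDrive/[DILab_data]/Computer_Vision/Fire_detection/FST1/FST1"
dst = "/content/FST1"
shutil.copytree(src, dst, dirs_exist_ok=True)

# then point the dataset YAML / --data argument at /content/FST1/train/images etc.
```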
|
closed
|
2024-04-01T08:33:15Z
|
2024-10-20T19:42:37Z
|
https://github.com/ultralytics/yolov5/issues/12865
|
[
"question",
"Stale"
] |
Cho-Hong-Seok
| 5
|
nolar/kopf
|
asyncio
| 164
|
kopf can't post events
|
> <a href="https://github.com/olivier-mauras"><img align="left" height="50" src="https://avatars3.githubusercontent.com/u/1299371?v=4"></a> An issue by [olivier-mauras](https://github.com/olivier-mauras) at _2019-08-05 15:27:11+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/164
>
## Expected Behavior
Be able to post events either custom of defaults.
## Actual Behavior
``` text
Custom event with kopf.info()
[2019-08-05 15:09:56,422] kopf.clients.events [WARNING ] Failed to post an event. Ignoring and continuing. Error: HTTPError('Event "kopf-event-tvlgd" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace'). Event: type='Normal', reason='SA_DELETED', message='Managed service account got deleted'.
Default events:
[2019-08-05 15:12:47,217] kopf.clients.events [WARNING ] Failed to post an event. Ignoring and continuing. Error: HTTPError('Event "kopf-event-5zplb" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace'). Event: type='Normal', reason='Logging', message="Handler 'create_ns' succeeded.".
```
## Steps to Reproduce the Problem
No particular code is actually running
My controller is running with a cluster-admin role in it's own namespace.
``` yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kopf-controllers
name: namespace-manager
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: namespace-manager-rolebinding-cluster
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: namespace-manager
namespace: kopf-controllers
```
A simple example like this gives the same result:
```python
import kopf
@kopf.on.resume('', 'v1', 'namespaces')
@kopf.on.create('', 'v1', 'namespaces')
def create_fn(spec, **kwargs):
print(f"And here we are! Creating: {spec}")
return {'message': 'hello world'} # will be the new status
```
Run with: `kopf run --standalone ./example.py `
``` text
...
[2019-08-05 15:19:54,389] kopf.clients.events [WARNING ] Failed to post an event. Ignoring and continuing. Error: HTTPError('Event "kopf-event-27kj9" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace'). Event: type='Normal', reason='Logging', message='All handlers succeeded for resuming.'.
[2019-08-05 15:19:54,402] kopf.clients.events [WARNING ] Failed to post an event. Ignoring and continuing. Error: HTTPError('Event "kopf-event-vdnqq" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace'). Event: type='Normal', reason='Logging', message="Handler 'create_fn' succeeded.".
[2019-08-05 15:19:54,416] kopf.clients.events [WARNING ] Failed to post an event. Ignoring and continuing. Error: HTTPError('Event "kopf-event-xf2ht" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace'). Event: type='Normal', reason='Logging', message='All handlers succeeded for update.'.
...
```
## Specifications
- Platform: Linux
- Kubernetes version:
``` text
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7", GitCommit:"6f482974b76db3f1e0f5d24605a9d1d38fad9a2b", GitTreeState:"clean", BuildDate:"2019-03-25T02:52:13Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:30:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
```
- Python version: `Python 3.7.4`
- Python packages installed:
``` text
# pip freeze --all
aiohttp==3.5.4
aiojobs==0.2.2
async-timeout==3.0.1
attrs==19.1.0
cachetools==3.1.1
certifi==2019.6.16
chardet==3.0.4
Click==7.0
google-auth==1.6.3
idna==2.8
iso8601==0.1.12
kopf==0.20
kubernetes==10.0.0
multidict==4.5.2
oauthlib==3.0.2
pip==19.1.1
pyasn1==0.4.5
pyasn1-modules==0.2.5
pykube-ng==0.28
python-dateutil==2.8.0
PyYAML==5.1.2
requests==2.22.0
requests-oauthlib==1.2.0
rsa==4.0
setuptools==41.0.1
six==1.12.0
urllib3==1.25.3
websocket-client==0.56.0
wheel==0.33.4
yarl==1.3.0
```
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2019-08-05 17:25:59+00:00_
>
[olivier-mauras](https://github.com/olivier-mauras) Thanks for reporting.
I fought with this issue few times, to no avail.
**Brief summary:** K8s events are namespaced, there are no cluster-scoped events. Also, k8s events can refer to an object via `spec.involvedObject` of type `ObjectReference`. This structure contains `namespace` field, to refer to the involved object's namespace (also, name, uid, etc).
I could not find a way to post _namespaced events for cluster-scoped resources_. It always fails with namespace mismatch (regardless of which library is used).
For an isolated example, here is an attempt to do this for `kind: Node`. The same happens for any cluster-scoped `involvedObject` though.
```bash
minikube start
curl -i \
--cacert ~/.minikube/ca.crt --cert ~/.minikube/client.crt --key ~/.minikube/client.key \
-X POST -H "Content-Type: application/json" \
https://192.168.64.15:8443/api/v1/namespaces/default/events \
-d '{
"metadata": {"namespace": "default", "generateName": "kopf-event-"},
"action": "Action?", "type": "SomeType", "reason": "SomeReason", "message": "Some message", "reportingComponent": "kopf", "reportingInstance": "dev", "source": {"component": "kopf"},
"firstTimestamp": "2019-08-05T16:44:56.078476Z", "lastTimestamp": "2019-08-05T16:44:56.078476Z", "eventTime": "2019-08-05T16:44:56.078476Z",
"involvedObject": {"kind": "Node", "name": "minikube", "uid": "minikube"}}'
HTTP/2 422
content-type: application/json
content-length: 532
date: Mon, 05 Aug 2019 17:19:09 GMT
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Event \"kopf-event-8h65p\" is invalid: involvedObject.namespace: Invalid value: \"\": does not match event.namespace",
"reason": "Invalid",
"details": {
"name": "kopf-event-8h65p",
"kind": "Event",
"causes": [
{
"reason": "FieldValueInvalid",
"message": "Invalid value: \"\": does not match event.namespace",
"field": "involvedObject.namespace"
}
]
},
"code": 422
}
```
If I assign a namespace to the involved object, in the assumption that the cluster-scoped object kind of "exists" in all namespaces at once, then the event is created. However, it is not shown on `kubectl describe`, as the attachment is wrong.
---
Surprisingly, in the same minikube deployment, I can see such events for `kind: Node` via:
```bash
kubectl get events -o yaml | less
```
```yaml
- apiVersion: v1
count: 1
eventTime: null
firstTimestamp: "2019-08-05T16:38:01Z"
involvedObject:
kind: Node
name: minikube
uid: minikube
kind: Event
lastTimestamp: "2019-08-05T16:38:01Z"
message: Starting kubelet.
metadata:
creationTimestamp: "2019-08-05T16:38:11Z"
name: minikube.15b8143362eeb00f
namespace: default
resourceVersion: "193"
selfLink: /api/v1/namespaces/default/events/minikube.15b8143362eeb00f
uid: d23980be-d745-40c7-8378-656fe2b31cb7
reason: Starting
reportingComponent: ""
reportingInstance: ""
source:
component: kubelet
host: minikube
type: Normal
```
So, conceptually it is possible for a namespaced event to refer to a non-namespaced object. But the API does not allow doing this, or I am doing something wrong. The API documentation gives no clues on how to post such events.
---
Any help on how such events can or should be posted, is welcomed.
What I can do here is a small fix: post the events to the current context's namespace (i.e. with the broken link, as explained above). At least they will be visible there via `kubectl get events`. Though it is unclear what the use-case of such events is beyond `kubectl describe`, at least the error messages in Kopf will be gone.
---
> <a href="https://github.com/olivier-mauras"><img align="left" height="30" src="https://avatars3.githubusercontent.com/u/1299371?v=4"></a> Commented by [olivier-mauras](https://github.com/olivier-mauras) at _2019-08-06 06:16:03+00:00_
>
[nolar](https://github.com/nolar) Thanks for your extended reply.
Maybe I should describe my use case to make things a bit clearer and maybe find an acceptable workaround. I actually don't mind the noise of the default events at all and had already run kopf in quiet mode :)
Right now my controller handles namespace creations/updates. For each namespace handled, the controller creates a set of predefined namespaced resources - service accounts, network policies.
I've seen your other comments about handling child deletions, and indeed I thought it would be smart and as non-convoluted as possible to have an on.delete handler for the child resources that would actually send a "delete" event to the namespace.
So here's my code
``` python
@kopf.on.delete('', 'v1', 'serviceaccounts', annotations={'custom/created_by': 'namespace-manager'})
async def sa_delete(body, namespace, **kwargs):
try:
api = kubernetes.client.CoreV1Api()
ns = api.read_namespace(name=namespace)
except ApiException as err:
sprint('ERROR', 'Exception when calling CoreV1Api->read_namespace: {}'.format(err))
kopf.info(ns.to_dict(), reason='SA_DELETED', message='Managed service account got deleted')
return
```
And so, with the event not working, the problem is that I can't "notify" the namespace of the modification.
What would be the best solution for this if we can't send an event at this point?
---
> <a href="https://github.com/olivier-mauras"><img align="left" height="30" src="https://avatars3.githubusercontent.com/u/1299371?v=4"></a> Commented by [olivier-mauras](https://github.com/olivier-mauras) at _2019-08-06 07:07:50+00:00_
>
Answering myself, something along the following that you posted in https://github.com/nolar/kopf/issues/153 would certainly work well
```python
@kopf.on.event('', 'v1', 'services')
def service_event(meta, name, namespace, event, **_):
# filter out irrelevant services
if not meta.get('labels', {}).get('my-app'):
return
# Now, it is surely our service.
# NB: the old service is already absent at this event, so there will be no HTTP 409 conflict.
if event['type'] == 'DELETED':
_recreate_service(name, namespace)
```
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2019-08-06 08:49:02+00:00_
>
[olivier-mauras](https://github.com/olivier-mauras) Ah, I see your point.
Just to clarify: Kopf is not a K8s client library, and is not going to be such. It only tries to be friendly to all existing K8s client libraries, leaving this choice to the developers.
The k8s-events support in Kopf is rudimentary: only to the extent needed for K8s<->Python integration of per-object logging (i.e. Python's `logging` system posting to `/api/v1/.../events` transparently). Plus some methods are publicly exposed also for the current object's logging.
This is the main purpose of Kopf: to marshal K8s activities/structures into Python primitives/functions/libraries, and back from Python to K8s.
Non-current object's logging is not intended as a feature (though, may work as a side effect, as in your example).
If you need to post some other sophisticated events for some other objects, you can use a client library API calls directly. In case of `kubernetes` client, it is [`CoreV1Api.create_namespaced_event`](https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#create_namespaced_event). Some hints on the body's content can be taken from Kopf ([`kopf.clients.events`](https://github.com/nolar/kopf/blob/e53a598c856cf9f8e02ef3cb207f15efea85d5f9/kopf/clients/events.py#L45-L65)).
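For illustration only, here is a rough sketch (not part of Kopf's API; the field values and names are placeholders, and class names are those of the `kubernetes` client version used in this thread — newer releases may differ) of what such a direct call could look like. Note the caveat discussed above: the API insists that `involvedObject.namespace` matches the event's namespace, so attaching an event to a cluster-scoped object may still behave oddly.
```python
import datetime
import kubernetes

def post_custom_event(namespace: str, reason: str, message: str) -> None:
    # Sketch: post an event into `namespace`, attached to the Namespace object
    # itself. All field values are illustrative, not prescribed by Kopf.
    api = kubernetes.client.CoreV1Api()
    now = datetime.datetime.now(datetime.timezone.utc)
    event = kubernetes.client.V1Event(
        metadata=kubernetes.client.V1ObjectMeta(
            namespace=namespace,
            generate_name='namespace-manager-',
        ),
        involved_object=kubernetes.client.V1ObjectReference(
            api_version='v1',
            kind='Namespace',
            name=namespace,
            namespace=namespace,  # must match the event's namespace (see the caveat above)
        ),
        type='Normal',
        reason=reason,
        message=message,
        source=kubernetes.client.V1EventSource(component='namespace-manager'),
        first_timestamp=now,
        last_timestamp=now,
    )
    api.create_namespaced_event(namespace=namespace, body=event)
```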
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2019-08-06 08:50:34+00:00_
>
[olivier-mauras](https://github.com/olivier-mauras) Regarding the code example, please note that label-filtering is now possible via the decorator kwargs, so it will suppress excessive logging for non-matching objects: https://kopf.readthedocs.io/en/latest/handlers/#filtering
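A minimal sketch of that filtering (the label key/value are placeholders; see the linked docs for the exact matching semantics in your Kopf version):
```python
import kopf

@kopf.on.event('', 'v1', 'services', labels={'my-app': 'enabled'})
def service_event(name, namespace, event, **_):
    # Only services carrying the matching label reach this handler,
    # so the explicit meta.get('labels', ...) check is no longer needed.
    if event['type'] == 'DELETED':
        print(f"Service {namespace}/{name} was deleted")
```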
---
> <a href="https://github.com/olivier-mauras"><img align="left" height="30" src="https://avatars3.githubusercontent.com/u/1299371?v=4"></a> Commented by [olivier-mauras](https://github.com/olivier-mauras) at _2019-08-06 08:53:42+00:00_
>
[nolar](https://github.com/nolar) Thanks for your clarification; as you say, what I thought was a feature is more of a side effect :)
I just implemented the on.event() sample and it works well enough in my use case.
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2019-09-26 14:30:41+00:00_
>
The changes are now in [`kopf==0.21`](https://github.com/nolar/kopf/releases/tag/0.21). If the issue reappears, feel free to re-open.
---
> <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-05-28 02:17:23+00:00_
>
[nolar](https://github.com/nolar) for what it's worth, I'm seeing this in 0.27rc6. Periodically I see the following messages in the logs:
```
Failed to post an event. Ignoring and continuing. Status: 500. Message: Internal Server Error. Event: type='Normal', reason='Logging', message="Timer 'MonitorTriadSets' succeeded.".
Failed to post an event. Ignoring and continuing. Status: 500. Message: Internal Server Error. Event: type='Normal', reason='Logging', message="Timer 'MonitorTriadSets' succeeded.".
```
I have a timer called MonitorTriadSets that's set to interval=3, idle=3.
---
> <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-08-01 04:17:01+00:00_
>
> [nolar](https://github.com/nolar) for what it's worth, I'm seeing this in 0.27rc6. Periodically I see the following messages in the logs:
>
> ```
> Failed to post an event. Ignoring and continuing. Status: 500. Message: Internal Server Error. Event: type='Normal', reason='Logging', message="Timer 'MonitorTriadSets' succeeded.".
> Failed to post an event. Ignoring and continuing. Status: 500. Message: Internal Server Error. Event: type='Normal', reason='Logging', message="Timer 'MonitorTriadSets' succeeded.".
> ```
>
> I have a timer called MonitorTriadSets that's set to interval=3, idle=3.
[nolar](https://github.com/nolar) do you think this can be reopened, since it appears to be occurring on timers? I'm not sure if there's a newer version, but it seems the project hasn't been updated in many months.
|
closed
|
2020-08-18T19:57:50Z
|
2022-10-12T05:13:34Z
|
https://github.com/nolar/kopf/issues/164
|
[
"bug",
"archive"
] |
kopf-archiver[bot]
| 2
|
littlecodersh/ItChat
|
api
| 314
|
How do I forward a message that was itself forwarded to me?
|
```python
import itchat
from itchat.content import SHARING

@itchat.msg_register(SHARING)
def text_reply(msg):
    msg.user.send(msg.Url.replace('&amp;', '&'))
```
I tried sending it like this, but the message that goes out is received as a plain URL. What should I do to get a real forwarding effect?
|
closed
|
2017-04-09T18:12:17Z
|
2017-04-10T02:36:43Z
|
https://github.com/littlecodersh/ItChat/issues/314
|
[] |
tofuli
| 1
|
xlwings/xlwings
|
automation
| 1,830
|
support python strftime format for range.number_format
|
The formatting of range in Excel is notoriously difficult to handle as it depends on the locale of the Excel application.
It would be nice to be able to support the standard python notation for datetime to format a range containing a datetime.
It is a matter of converting a string like "%m/%d/%y, %H:%M:%S" to e.g. "mm/jj/aaaa, hh:mm:ss" (if App is in a french locale).
The following monkey patch implements this idea
```python
import re

import xlwings
from xlwings.constants import ApplicationInternational
def convert_datetime_format(app: xlwings.App, s: str) -> str:
"""Transform the string 's' with datetime format to an excel number_format string adapted to the locale of the 'app'
TODO: cache the construction of the map_py2xl and the pattern within the app
an alternative is to have a function convert_datetime_format(app) => return the interpret_re with
a use like:
formatter = app.convert_datetime_format()
excel_format = formatter("%m/%d/%y, %H:%M:%S")
TODO: have some automagic in the Range.number_format property (or have a Range.number_pyformat property that automatically
does the conversion).
References:
- (datetime in excel) https://support.microsoft.com/en-us/office/format-numbers-as-dates-or-times-418bd3fe-0577-47c8-8caa-b4d30c528309
- (datetime in python) https://pandas.pydata.org/docs/reference/api/pandas.Period.strftime.html
TODO: do the same but for number format
- (numbers) https://support.microsoft.com/en-us/office/number-format-codes-5026bbd6-04bc-48cd-bf33-80f18b4eae68
"""
# check minutes come after hour, otherwise not manageable in excel
minute = {"%M", "%-M", "%#M"}
hour_second = {"%I", "%H", "%-H", "%#H", "%S", "%-S", "%#S"}
minute_found = next((s.index(m) for m in minute if m in s), None)
hour_second_found = next((s.index(hs) for hs in hour_second if hs in s[:minute_found]), None)
if minute_found is not None and hour_second_found is None:
raise ValueError(
f"Unsupported python format '{s}' has it has minutes coming before hours or seconds (Excel limitations)"
)
# build a map between python formats and excel formats
codes = (None,) + app.api.International
map_py2xl = {
"%Y": codes[ApplicationInternational.xlYearCode] * 2, # year two-digit
"%y": codes[ApplicationInternational.xlYearCode] * 4, # year four-digit
"%#m": codes[ApplicationInternational.xlMonthCode] * 1, # month 1,12
"%-m": codes[ApplicationInternational.xlMonthCode] * 1, # month 1,12
"%m": codes[ApplicationInternational.xlMonthCode] * 2, # month 01,12
"%b": codes[ApplicationInternational.xlMonthCode] * 3, # month Jan
"%B": codes[ApplicationInternational.xlMonthCode] * 4, # month January
"%-d": codes[ApplicationInternational.xlDayCode] * 1, # day 1,12
"%#d": codes[ApplicationInternational.xlDayCode] * 1, # day 1,12
"%d": codes[ApplicationInternational.xlDayCode] * 2, # day 01,12
"%a": codes[ApplicationInternational.xlDayCode] * 3, # day Mon
"%A": codes[ApplicationInternational.xlDayCode] * 4, # day Monday
"%-H": codes[ApplicationInternational.xlHourCode] * 1, # hour 0,23
"%#H": codes[ApplicationInternational.xlHourCode] * 1, # hour 0,23
"%H": codes[ApplicationInternational.xlHourCode] * 2, # hour 00,23
"%#M": codes[ApplicationInternational.xlMinuteCode] * 1, # minute 0-59
"%-M": codes[ApplicationInternational.xlMinuteCode] * 1, # minute 0-59
"%M": codes[ApplicationInternational.xlMinuteCode] * 2, # minute 00-59
"%-S": codes[ApplicationInternational.xlSecondCode] * 1, # second 0-59
"%#S": codes[ApplicationInternational.xlSecondCode] * 1, # second 0-59
"%S": codes[ApplicationInternational.xlSecondCode] * 2, # second 00-59
}
# add %I notation for hours in AM/PM format (does not cover all combinations but the most natural ones)
map_py2xl = {
"%I:%M:%S": "{%H}:{%M}:{%S} AM/PM".format(**map_py2xl),
"%I:%M": "{%H}:{%M} AM/PM".format(**map_py2xl),
"%I": "{%H} AM/PM".format(**map_py2xl),
**map_py2xl,
}
# build regexp to find python formats
pattern = re.compile("|".join(map(re.escape, map_py2xl)))
def replace_py2xl(s):
# replace within a string the python format with the excel format
return pattern.sub(lambda m: map_py2xl.get(m.group()), s)
return replace_py2xl(s)
xlwings.App.convert_datetime_format = convert_datetime_format
```
And can be used as:
```python
app = list(xlwings.apps)[0]
print(app.convert_datetime_format("%m/%d/%y, %H:%M:%S"))
print(app.convert_datetime_format("%Y/%m %S:%M:%H"))
print(app.convert_datetime_format("%H %M%b %B %I"))
print(app.convert_datetime_format("%S %M%b %B %I"))
print(app.convert_datetime_format("%M%b %B %I"))  # raises a ValueError as minutes are formatted before any hours/seconds
# outputs
# mm/jj/aa, hh:mm:ss
# aaaa/mm ss:mm:hh
# hh mmmmm mmmm hh AM/PM
# ss mmmmm mmmm hh AM/PM
# Traceback (most recent call last):
#   File ".../expr_eval.py", line nn, in <module>
#     print(app.convert_datetime_format("%M%b %B %I"))  # should raise an error
#   File ".../expr_eval.py", line nn, in convert_datetime_format
#     raise ValueError(f"Unsupported python format '{s}' as it has minutes coming before hours or seconds (Excel limitations)")
# ValueError: Unsupported python format '%M%b %B %I' as it has minutes coming before hours or seconds (Excel limitations)
```
|
open
|
2022-02-12T13:25:01Z
|
2022-02-14T12:53:28Z
|
https://github.com/xlwings/xlwings/issues/1830
|
[] |
sdementen
| 2
|
netbox-community/netbox
|
django
| 17,867
|
Dropdowns for all forms appear empty after upgrade
|
### Deployment Type
Self-hosted
### Triage priority
N/A
### NetBox Version
v4.1.3
### Python Version
3.12
### Steps to Reproduce
1. Beginning on v3.7.8, add data (device roles, for example)
2. Upgrade to v4.1.3
3. Attempt to create a device using the previously created device role
### Expected Behavior
The dropdowns should be populated with the corresponding data.
### Observed Behavior
Dropdowns are empty, despite having data available.


|
closed
|
2024-10-27T04:04:21Z
|
2025-02-06T03:03:47Z
|
https://github.com/netbox-community/netbox/issues/17867
|
[] |
ethnt
| 4
|
yt-dlp/yt-dlp
|
python
| 12,219
|
Youtube Cookie Issue
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
North America
### Provide a description that is worded well enough to be understood
It seems YouTube changed things again without warning.
No YouTube content is currently being downloaded without cookies, even on the latest nightly.
Note: using cookies allows downloading.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=zAqvh9KaZJc']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.01.26.034637 from yt-dlp/yt-dlp-nightly-builds [3b4531934] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 2024-03-28-git-5d71f97e0e-full_build-www.gyan.dev (setts), ffprobe 2024-03-28-git-5d71f97e0e-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.2
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Extractor Plugins: TTUser (TikTokUserIE)
[debug] Plugin directories: ['C:\\Users\\gojigoodra\\Documents\\CMD Files\\yt-dlp-plugins\\yt_dlp_ttuser-2024.3.18-py3-none-any.whl\\yt_dlp_plugins']
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2025.01.26.034637 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.01.26.034637 from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=zAqvh9KaZJc
[youtube] zAqvh9KaZJc: Downloading webpage
[youtube] zAqvh9KaZJc: Downloading tv client config
[youtube] zAqvh9KaZJc: Downloading player 37364e28
[youtube] zAqvh9KaZJc: Downloading tv player API JSON
[youtube] zAqvh9KaZJc: Downloading ios player API JSON
ERROR: [youtube] zAqvh9KaZJc: Sign in to confirm you’re not a bot. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies. Also see https://github.com/yt-dlp/yt-dlp/wiki/Extractors#exporting-youtube-cookies for tips on effectively exporting YouTube cookies
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\youtube.py", line 4706, in _real_extract
File "yt_dlp\extractor\common.py", line 1276, in raise_no_formats
```
|
closed
|
2025-01-28T01:05:53Z
|
2025-02-17T07:33:01Z
|
https://github.com/yt-dlp/yt-dlp/issues/12219
|
[
"duplicate",
"site-bug",
"site:youtube"
] |
Bonboon229
| 8
|
roboflow/supervision
|
tensorflow
| 1,218
|
tflite with supervison
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hi, does supervision support TFLite detection results? If not, is there a way to convert TFLite outputs to the `Detections` class format?
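For context, here is a rough, untested sketch of the kind of mapping I have in mind, assuming a typical TFLite SSD detector that returns normalised `[ymin, xmin, ymax, xmax]` boxes plus class ids and scores (all values below are placeholders):
```python
import numpy as np
import supervision as sv

# Hypothetical outputs from a TFLite SSD postprocess (placeholders).
boxes = np.array([[0.10, 0.20, 0.50, 0.60]])   # [ymin, xmin, ymax, xmax], normalised
scores = np.array([0.90])
class_ids = np.array([3])

h, w = 480, 640  # size of the frame the model ran on

# Rescale to absolute pixel coordinates in xyxy order.
xyxy = np.column_stack([
    boxes[:, 1] * w,  # x_min
    boxes[:, 0] * h,  # y_min
    boxes[:, 3] * w,  # x_max
    boxes[:, 2] * h,  # y_max
])

detections = sv.Detections(
    xyxy=xyxy.astype(np.float32),
    confidence=scores.astype(np.float32),
    class_id=class_ids.astype(int),
)
```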
### Additional
_No response_
|
closed
|
2024-05-21T15:38:18Z
|
2024-11-02T17:22:19Z
|
https://github.com/roboflow/supervision/issues/1218
|
[
"question"
] |
abichoi
| 4
|
aio-libs/aiopg
|
sqlalchemy
| 105
|
Why is only ISOLATION_LEVEL_READ_COMMITTED supported?
|
As the title states, I'm curious to know why the documentation states that only `ISOLATION_LEVEL_READ_COMMITTED` is supported.
**Edit:** I've just been looking at the `psycopg2` documentation. Would I be correct to assume that other isolation levels are actually supported, but **setting** them using the `set_isolation_level` api is unsupported?
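If that's the case, I assume something like the following untested sketch would be the way to use a different isolation level by issuing the SQL myself, since aiopg connections run in autocommit mode and transactions are driven with explicit `BEGIN`/`COMMIT` (the DSN is a placeholder):
```python
import asyncio
import aiopg

DSN = "dbname=test user=postgres"  # placeholder connection string

async def main():
    async with aiopg.create_pool(DSN) as pool:
        async with pool.acquire() as conn:
            async with conn.cursor() as cur:
                # Drive the transaction (and its isolation level) with plain SQL.
                await cur.execute("BEGIN ISOLATION LEVEL SERIALIZABLE")
                await cur.execute("SELECT 42")
                await cur.execute("COMMIT")

asyncio.run(main())
```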
|
closed
|
2016-02-14T21:52:31Z
|
2016-02-17T13:35:34Z
|
https://github.com/aio-libs/aiopg/issues/105
|
[] |
TimNN
| 2
|
davidteather/TikTok-Api
|
api
| 311
|
[BUG] - Getting "Access Denied" when trying to download user videos, only works with trending() method
|
I'm trying to download the videos of a specific TikTok user. When I do this, instead of the video file I get an "Access Denied" error like this:
```
Access Denied
You don't have permission to access "http://v16-web.tiktok.com/video/tos/useast2a/tos-useast2a-ve-0068c001/ba7ff9409cc74e98b41a20b81b999d4f/?" on this server.
Reference #18.e376868.1604401690.3f78b68
```
The only time the download works is when I get the trending videos instead of a specific user. The following code reproduces this bug:
```
from TikTokApi import TikTokApi
api = TikTokApi()
tiktoks = api.byUsername("yodelinghaley")
# This download will not work: Access Denied
data = api.get_Video_By_TikTok(tiktoks[0])
with open("downloads/user.mp4", 'wb') as output:
output.write(data)
tiktoks = api.trending()
# This download will work
data = api.get_Video_By_TikTok(tiktoks[0])
with open("downloads/trending.mp4", 'wb') as output:
output.write(data)
```
The file `user.mp4` will contain the Access Denied error I pasted earlier. The file `trending.mp4` will correctly contain a video from the trending section.
**Desktop:**
- OS: Ubuntu 18.04
- TikTokApi Version 3.6.0
|
closed
|
2020-11-03T11:13:56Z
|
2020-11-09T02:41:28Z
|
https://github.com/davidteather/TikTok-Api/issues/311
|
[
"bug"
] |
frankelia
| 3
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 1,192
|
ost vram
|
closed
|
2023-04-14T22:03:24Z
|
2023-04-14T22:03:41Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1192
|
[] |
jhuebner79
| 0
|
|
litestar-org/litestar
|
asyncio
| 3,778
|
Docs: SerializationPluginProtocol example is cut off
|
### Summary
It seems the source has changed (https://docs.advanced-alchemy.litestar.dev/latest/reference/extensions/litestar/plugins/serialization.html#advanced_alchemy.extensions.litestar.plugins.serialization.SQLAlchemySerializationPlugin.supports_type) and the example is missing quite a bit:
<img width="915" alt="image" src="https://github.com/user-attachments/assets/af00a0fe-8c2a-45ed-ba80-1a5b6799ed40">
|
open
|
2024-10-04T06:20:04Z
|
2025-03-20T15:54:57Z
|
https://github.com/litestar-org/litestar/issues/3778
|
[
"Documentation :books:",
"Help Wanted :sos:",
"Good First Issue"
] |
JacobCoffee
| 0
|
onnx/onnx
|
tensorflow
| 6,295
|
Build Error: Protobuf pinned version missing `<cstdint>` definition on its headers
|
# Bug Report
### Is the issue related to model conversion?
N/A
### Describe the bug
I have set up a C++ project that builds ONNX in order to use it as a library.
It uses CMake and fetches the onnx repo pinned to v1.16.2.
It compiles onnx and its dependencies, but it throws an error while building the pinned Protobuf version.
It is missing its `<cstdint>` include for a couple of files:
```
<path>/bld/_deps/protobuf-src/src/google/protobuf/port.h:186:67: error: ‘uint32_t’ was not declared in this scope
186 | : absl::disjunction<std::is_same<T, int32_t>, std::is_same<T, uint32_t>,
...
<path>/bld/_deps/protobuf-src/src/google/protobuf/compiler/java/kotlin_generator.cc:47:10: error: no declaration matches ‘uint64_t google::protobuf::compiler::java::KotlinGenerator::GetSupportedFeatures() const’
...
<path>/bld/_deps/protobuf-src/src/google/protobuf/compiler/code_generator.h:50:1: note: ‘uint64_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
```
Related issue https://github.com/google/mozc/issues/742 from the mozc project.
Related pull request https://github.com/protocolbuffers/protobuf/pull/12331.
It seems that the proper include was added in version 22.5.
This is a known issue in protobuf and has already been addressed in newer versions, as the latest file contains the aforementioned `<cstdint>` include.
There also appears to be an open PR that updates protobuf to v22.5: https://github.com/onnx/onnx/pull/6167.
The same setup on Windows using MSVC compiles without an issue.
### System information
- WSL Ubuntu noble 24.04 x86_64
- ONNX Version v1.16.2
- Python version: 3.12.3
- GCC version: 13.2.0 (Ubuntu 13.2.0-23ubuntu4)
- CMake version: 3.29.6
- Protobuf version: 22.3
```cmake
set(ProtobufURL https://github.com/protocolbuffers/protobuf/releases/download/v22.3/protobuf-22.3.tar.gz)
set(ProtobufSHA1 310938afea334b98d7cf915b099ec5de5ae3b5c5)
```
### Reproduction instructions
Described above
### Expected behavior
A clean compilation of the dependencies and onnx.
|
closed
|
2024-08-13T16:42:46Z
|
2025-01-04T16:01:01Z
|
https://github.com/onnx/onnx/issues/6295
|
[
"bug",
"topic: build"
] |
lmariscal
| 1
|
numpy/numpy
|
numpy
| 28,384
|
DOC: numpy.char.encode example is for numpy.strings.encode
|
### Issue with current documentation:
The example for numpy.char.encode is the following:
```python
import numpy as np
a = np.array(['aAaAaA', ' aA ', 'abBABba'])
np.strings.encode(a, encoding='cp037')
```
### Idea or request for content:
Update the example so that it uses numpy.char.encode instead of np.strings.encode.
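i.e., presumably the example should look something like this:
```python
import numpy as np
a = np.array(['aAaAaA', ' aA ', 'abBABba'])
np.char.encode(a, encoding='cp037')
```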
|
closed
|
2025-02-23T13:48:15Z
|
2025-02-23T17:04:55Z
|
https://github.com/numpy/numpy/issues/28384
|
[
"04 - Documentation"
] |
mturnansky
| 1
|
PokeAPI/pokeapi
|
graphql
| 728
|
Can't get sprites from graphql API
|
I'm making an app that needs to get some data for all the Pokémon, so I decided to use GraphQL for it. While I was testing the web interface, I noticed I'm not able to get any sprites: the returned JSON from "sprites" has just null values.
My Query:
```graphql
query MyQuery {
pokemon_v2_pokemon(where: {is_default: {_eq: true}, id: {_is_null: false}}) {
id
name
pokemon_v2_pokemonspecy {
is_legendary
is_mythical
pokemon_v2_pokemoncolor {
name
}
}
pokemon_v2_pokemonsprites(where: {}) {
sprites
}
}
}
```
The returned data (only for the first Pokémon):
```json
"pokemon_v2_pokemon": [
{
"id": 1,
"name": "bulbasaur",
"pokemon_v2_pokemonspecy": {
"is_legendary": false,
"is_mythical": false,
"pokemon_v2_pokemoncolor": {
"name": "green"
}
},
"pokemon_v2_pokemonsprites": [
{
"sprites": "{\"front_default\": null, \"front_female\": null, \"front_shiny\": null, \"front_shiny_female\": null, \"back_default\": null, \"back_female\": null, \"back_shiny\": null, \"back_shiny_female\": null, \"other\": {\"dream_world\": {\"front_default\": null, \"front_female\": null}, \"home\": {\"front_default\": null, \"front_female\": null, \"front_shiny\": null, \"front_shiny_female\": null}, \"official-artwork\": {\"front_default\": null}}, \"versions\": {\"generation-i\": {\"red-blue\": {\"front_default\": null, \"front_gray\": null, \"back_default\": null, \"back_gray\": null, \"front_transparent\": null, \"back_transparent\": null}, \"yellow\": {\"front_default\": null, \"front_gray\": null, \"back_default\": null, \"back_gray\": null, \"front_transparent\": null, \"back_transparent\": null}}, \"generation-ii\": {\"crystal\": {\"front_default\": null, \"front_shiny\": null, \"back_default\": null, \"back_shiny\": null, \"front_transparent\": null, \"front_shiny_transparent\": null, \"back_transparent\": null, \"back_shiny_transparent\": null}, \"gold\": {\"front_default\": null, \"front_shiny\": null, \"back_default\": null, \"back_shiny\": null, \"front_transparent\": null}, \"silver\": {\"front_default\": null, \"front_shiny\": null, \"back_default\": null, \"back_shiny\": null, \"front_transparent\": null}}, \"generation-iii\": {\"emerald\": {\"front_default\": null, \"front_shiny\": null}, \"firered-leafgreen\": {\"front_default\": null, \"front_shiny\": null, \"back_default\": null, \"back_shiny\": null}, \"ruby-sapphire\": {\"front_default\": null, \"front_shiny\": null, \"back_default\": null, \"back_shiny\": null}}, \"generation-iv\": {\"diamond-pearl\": {\"front_default\": null, \"front_female\": null, \"front_shiny\": null, \"front_shiny_female\": null, \"back_default\": null, \"back_female\": null, \"back_shiny\": null, \"back_shiny_female\": null}, \"heartgold-soulsilver\": {\"front_default\": null, \"front_female\": null, \"front_shiny\": null, \"front_shiny_female\": null, \"back_default\": null, \"back_female\": null, \"back_shiny\": null, \"back_shiny_female\": null}, \"platinum\": {\"front_default\": null, \"front_female\": null, \"front_shiny\": null, \"front_shiny_female\": null, \"back_default\": null, \"back_female\": null, \"back_shiny\": null, \"back_shiny_female\": null}}, \"generation-v\": {\"black-white\": {\"front_default\": null, \"front_female\": null, \"front_shiny\": null, \"front_shiny_female\": null, \"back_default\": null, \"back_female\": null, \"back_shiny\": null, \"back_shiny_female\": null, \"animated\": {\"front_default\": null, \"front_female\": null, \"front_shiny\": null, \"front_shiny_female\": null, \"back_default\": null, \"back_female\": null, \"back_shiny\": null, \"back_shiny_female\": null}}}, \"generation-vi\": {\"omegaruby-alphasapphire\": {\"front_default\": null, \"front_female\": null, \"front_shiny\": null, \"front_shiny_female\": null}, \"x-y\": {\"front_default\": null, \"front_female\": null, \"front_shiny\": null, \"front_shiny_female\": null}}, \"generation-vii\": {\"ultra-sun-ultra-moon\": {\"front_default\": null, \"front_female\": null, \"front_shiny\": null, \"front_shiny_female\": null}, \"icons\": {\"front_default\": null, \"front_female\": null}}, \"generation-viii\": {\"icons\": {\"front_default\": null, \"front_female\": null}}}}"
}
]
}
```
For readability, here's the JSON returned from "sprites", pretty-printed:
```json
{
"front_default":null,
"front_female":null,
"front_shiny":null,
"front_shiny_female":null,
"back_default":null,
"back_female":null,
"back_shiny":null,
"back_shiny_female":null,
"other":{
"dream_world":{
"front_default":null,
"front_female":null
},
"home":{
"front_default":null,
"front_female":null,
"front_shiny":null,
"front_shiny_female":null
},
"official-artwork":{
"front_default":null
}
},
"versions":{
"generation-i":{
"red-blue":{
"front_default":null,
"front_gray":null,
"back_default":null,
"back_gray":null,
"front_transparent":null,
"back_transparent":null
},
"yellow":{
"front_default":null,
"front_gray":null,
"back_default":null,
"back_gray":null,
"front_transparent":null,
"back_transparent":null
}
},
"generation-ii":{
"crystal":{
"front_default":null,
"front_shiny":null,
"back_default":null,
"back_shiny":null,
"front_transparent":null,
"front_shiny_transparent":null,
"back_transparent":null,
"back_shiny_transparent":null
},
"gold":{
"front_default":null,
"front_shiny":null,
"back_default":null,
"back_shiny":null,
"front_transparent":null
},
"silver":{
"front_default":null,
"front_shiny":null,
"back_default":null,
"back_shiny":null,
"front_transparent":null
}
},
"generation-iii":{
"emerald":{
"front_default":null,
"front_shiny":null
},
"firered-leafgreen":{
"front_default":null,
"front_shiny":null,
"back_default":null,
"back_shiny":null
},
"ruby-sapphire":{
"front_default":null,
"front_shiny":null,
"back_default":null,
"back_shiny":null
}
},
"generation-iv":{
"diamond-pearl":{
"front_default":null,
"front_female":null,
"front_shiny":null,
"front_shiny_female":null,
"back_default":null,
"back_female":null,
"back_shiny":null,
"back_shiny_female":null
},
"heartgold-soulsilver":{
"front_default":null,
"front_female":null,
"front_shiny":null,
"front_shiny_female":null,
"back_default":null,
"back_female":null,
"back_shiny":null,
"back_shiny_female":null
},
"platinum":{
"front_default":null,
"front_female":null,
"front_shiny":null,
"front_shiny_female":null,
"back_default":null,
"back_female":null,
"back_shiny":null,
"back_shiny_female":null
}
},
"generation-v":{
"black-white":{
"front_default":null,
"front_female":null,
"front_shiny":null,
"front_shiny_female":null,
"back_default":null,
"back_female":null,
"back_shiny":null,
"back_shiny_female":null,
"animated":{
"front_default":null,
"front_female":null,
"front_shiny":null,
"front_shiny_female":null,
"back_default":null,
"back_female":null,
"back_shiny":null,
"back_shiny_female":null
}
}
},
"generation-vi":{
"omegaruby-alphasapphire":{
"front_default":null,
"front_female":null,
"front_shiny":null,
"front_shiny_female":null
},
"x-y":{
"front_default":null,
"front_female":null,
"front_shiny":null,
"front_shiny_female":null
}
},
"generation-vii":{
"ultra-sun-ultra-moon":{
"front_default":null,
"front_female":null,
"front_shiny":null,
"front_shiny_female":null
},
"icons":{
"front_default":null,
"front_female":null
}
},
"generation-viii":{
"icons":{
"front_default":null,
"front_female":null
}
}
}
}
```
|
closed
|
2022-06-27T17:46:15Z
|
2022-06-28T13:11:35Z
|
https://github.com/PokeAPI/pokeapi/issues/728
|
[] |
ZeroKun265
| 1
|
koxudaxi/datamodel-code-generator
|
fastapi
| 1,423
|
const fields and discriminators seem lost when using `--output-model-type pydantic_v2.BaseModel` flag
|
**Describe the bug**
const fields and discriminators seem ignored when using `--output-model-type pydantic_v2.BaseModel` flag.
**To Reproduce**
Example schema:
```json
{
"openapi": "3.1.0",
"info": { "title": "FastAPI", "version": "0.1.0" },
"paths": {
"/polymorphic_dict_response": {
"get": {
"summary": "Get",
"operationId": "get_polymorphic_dict_response_get",
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/PolymorphicDictResponse"
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"ChoiceRule": {
"properties": {
"suggestions": {
"items": { "type": "string" },
"type": "array",
"title": "Suggestions",
"default": []
},
"rule": {
"const": "ChoiceRule",
"title": "Rule",
"default": "ChoiceRule"
},
"choices": {
"items": { "type": "string" },
"type": "array",
"minItems": 1,
"title": "Choices"
}
},
"type": "object",
"required": ["choices"],
"title": "ChoiceRule"
},
"DataIDRule": {
"properties": {
"suggestions": {
"items": { "type": "string" },
"type": "array",
"title": "Suggestions",
"default": []
},
"rule": {
"const": "DataIDRule",
"title": "Rule",
"default": "DataIDRule"
}
},
"type": "object",
"title": "DataIDRule"
},
"PolymorphicDictResponse": {
"properties": {
"name": { "type": "string", "title": "Name" },
"rules": {
"additionalProperties": {
"oneOf": [
{ "$ref": "#/components/schemas/DataIDRule" },
{ "$ref": "#/components/schemas/ChoiceRule" }
],
"discriminator": {
"propertyName": "rule",
"mapping": {
"ChoiceRule": "#/components/schemas/ChoiceRule",
"DataIDRule": "#/components/schemas/DataIDRule"
}
}
},
"type": "object",
"title": "Rules"
}
},
"type": "object",
"required": ["name", "rules"],
"title": "PolymorphicDictResponse"
}
}
}
}
```
Used commandline:
```
$ datamodel-codegen \
--input ./openapi.json \
--input-file-type openapi \
--target-python-version 3.10 \
--output-model-type pydantic_v2.BaseModel \
--use-one-literal-as-default \
--enum-field-as-literal one \
--output _generated_models.py \
--validation
```
**Expected behavior**
The generated output was:
```
class ChoiceRule(BaseModel):
suggestions: Optional[List[str]] = Field([], title='Suggestions')
rule: str = Field('ChoiceRule', title='Rule')
choices: List[str] = Field(..., min_length=1, title='Choices')
class DataIDRule(BaseModel):
suggestions: Optional[List[str]] = Field([], title='Suggestions')
rule: str = Field('DataIDRule', title='Rule')
class PolymorphicDictResponse(BaseModel):
name: str = Field(..., title='Name')
rules: Dict[str, Union[DataIDRule, ChoiceRule]] = Field(..., title='Rules')
```
I expected:
```
class ChoiceRule(BaseModel):
suggestions: Optional[List[str]] = Field([], title='Suggestions')
rule: Literal["ChoiceRule"] = "ChoiceRule" <=== The important part
choices: List[str] = Field(..., min_length=1, title='Choices')
class DataIDRule(BaseModel):
suggestions: Optional[List[str]] = Field([], title='Suggestions')
rule: Literal['DataIDRule'] = 'DataIDRule' <=== The important part
class PolymorphicDictResponse(BaseModel):
name: str = Field(..., title='Name')
rules: Dict[
str,
Annotated[
Union[
DataIDRule,
ChoiceRule,
],
Field(discriminator="rule"), <=== The important part
],
] = Field(..., title="Rules")
```
**Version:**
- OS: Ubuntu
- Python version: Python 3.10.6
- datamodel-code-generator version: 0.21.1
**Additional context**
I have attached a script and a test case that reproduce the problem. It defines a FastAPI application to illustrate the issue. In the application I define pydantic types as response models, export the resulting OpenAPI schema, and then use it to generate the types once again.
When I parse a response from the same app, the polymorphic types get mixed up. The wrong classes get instantiated.
When you follow the trail, you see that the FastAPI-generated OpenAPI specification correctly states that the `rule` fields are consts, and also describes the discriminator for the Rule components. This information seems lost after datamodel-code-generation.
Additionally, this worked well when FastAPI used pydantic 1.x, as it would emit a single-valued enum field in the OpenAPI spec, and running datamodel-code-generator without '--output-model-type' gave the desired result.
How to reproduce:
- Run generate_models.py
- Run test_pydantic.py
[test_pydantic.zip](https://github.com/koxudaxi/datamodel-code-generator/files/12028939/test_pydantic.zip)
|
closed
|
2023-07-12T14:24:24Z
|
2023-08-24T08:23:43Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/1423
|
[] |
tomjelen
| 1
|
deepfakes/faceswap
|
deep-learning
| 1,296
|
Faceswap Gui Crashes On Launch
|
Hello, I've been following the faceswap install guide as I had to start a new virtual environment again, but every time I load up the GUI it crashes. I've also tried to update everything, but nothing seems to work and it keeps taking me around in loops. I'm using a Mac on CPU, so it's probably a dinosaur here, but I'm hoping it can still be resolved. Here's my crash report:
```
01/17/2023 11:39:46 MainProcess MainThread logger log_setup INFO Log level set to: INFO
01/17/2023 11:39:50 MainProcess MainThread __init__ <module> DEBUG Creating converter from 7 to 5
01/17/2023 11:39:50 MainProcess MainThread __init__ <module> DEBUG Creating converter from 5 to 7
01/17/2023 11:39:50 MainProcess MainThread __init__ <module> DEBUG Creating converter from 7 to 5
01/17/2023 11:39:50 MainProcess MainThread __init__ <module> DEBUG Creating converter from 5 to 7
Traceback (most recent call last):
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/requests/compat.py", line 11, in <module>
import chardet
ModuleNotFoundError: No module named 'chardet'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/USERS/faceswap/lib/cli/launcher.py", line 224, in execute_script
script = self._import_script()
File "/Users/USERS/faceswap/lib/cli/launcher.py", line 46, in _import_script
self._test_for_tf_version()
File "/Users/USERS/faceswap/lib/cli/launcher.py", line 95, in _test_for_tf_version
import tensorflow as tf # noqa pylint:disable=import-outside-toplevel,unused-import
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/__init__.py", line 51, in <module>
from ._api.v2 import compat
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/_api/v2/compat/__init__.py", line 37, in <module>
from . import v1
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/_api/v2/compat/v1/__init__.py", line 30, in <module>
from . import compat
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/_api/v2/compat/v1/compat/__init__.py", line 38, in <module>
from . import v2
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/_api/v2/compat/v1/compat/v2/__init__.py", line 28, in <module>
from tensorflow._api.v2.compat.v2 import __internal__
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/_api/v2/compat/v2/__init__.py", line 33, in <module>
from . import compat
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/_api/v2/compat/v2/compat/__init__.py", line 38, in <module>
from . import v2
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/_api/v2/compat/v2/compat/v2/__init__.py", line 37, in <module>
from tensorflow._api.v2.compat.v2 import distribute
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/_api/v2/compat/v2/distribute/__init__.py", line 182, in <module>
from . import experimental
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/_api/v2/compat/v2/distribute/experimental/__init__.py", line 10, in <module>
from . import rpc
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/_api/v2/compat/v2/distribute/experimental/rpc/__init__.py", line 8, in <module>
from tensorflow.python.distribute.experimental.rpc.rpc_ops import Client
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/distribute/experimental/__init__.py", line 22, in <module>
from tensorflow.python.distribute.failure_handling import failure_handling
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/distribute/failure_handling/failure_handling.py", line 33, in <module>
from tensorflow.python.distribute.failure_handling import gce_util
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/distribute/failure_handling/gce_util.py", line 20, in <module>
import requests
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/requests/__init__.py", line 45, in <module>
from .exceptions import RequestsDependencyWarning
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/requests/exceptions.py", line 9, in <module>
from .compat import JSONDecodeError as CompatJSONDecodeError
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/requests/compat.py", line 13, in <module>
import charset_normalizer as chardet
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/charset_normalizer/__init__.py", line 23, in <module>
from charset_normalizer.api import from_fp, from_path, from_bytes, normalize
File "/Users/USERS/opt/anaconda3/envs/faceswap/lib/python3.8/site-packages/charset_normalizer/api.py", line 10, in <module>
from charset_normalizer.md import mess_ratio
AttributeError: partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' (most likely due to a circular import)
============ System Information ============
backend: amd
encoding: UTF-8
git_branch: master
git_commits: 80f6328 Bugfixes - plugin.train.unbalanced - decoder b - gui.stats - crash when log folder deleted - setup.py - install gcc=12.1.0 on linux (scipy dep) - setup.py - don't test ROCm unless valid - typofix - cli_args extract singleprocess help typing - lib.gui.analysis.event_reader - lib.gpu_stats.directml - make platform specific unit tests - lib.gui.analysis.event_reader. 28cb2fc Unhide options for ROCm. c455601 Linux - ROCm (AMD) support. edd7e52 docs: typofix. 0dbeafb bugfix: sysinfo - Fix getting free Vram on Nvidia - Don't halt sysinfo on GPU stats error - lib.sysinfo unit test
gpu_cuda: No global version found. Check Conda packages for Conda Cuda
gpu_cudnn: No global version found. Check Conda packages for Conda cuDNN
gpu_devices: GPU_0: llvm_cpu.0 - CPU (via LLVM) (experimental)
gpu_devices_active: GPU_0
gpu_driver: No Driver Found
gpu_vram: GPU_0: 4096MB (4096MB free)
os_machine: x86_64
os_platform: macOS-10.15.6-x86_64-i386-64bit
os_release: 19.6.0
py_command: faceswap.py gui
py_conda_version: conda 22.11.1
py_implementation: CPython
py_version: 3.8.15
py_virtual_env: True
sys_cores: 8
sys_processor: i386
sys_ram: Total: 16384MB, Available: 8017MB, Used: 8203MB, Free: 60MB
=============== Pip Packages ===============
absl-py @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_09tbpzfe9v/croot/absl-py_1666362946999/work
aiohttp @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_690hlorvpy/croot/aiohttp_1670009554132/work
aiosignal @ file:///tmp/build/80754af9/aiosignal_1637843061372/work
astunparse==1.6.3
async-timeout @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_732x52axrd/croots/recipe/async-timeout_1664876366763/work
attrs @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_33k1uces4n/croot/attrs_1668696162258/work
blinker==1.4
brotlipy==0.7.0
cachetools==4.2.4
certifi @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_477u68wvzm/croot/certifi_1671487773341/work/certifi
cffi @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_1b0qzba5nr/croot/cffi_1670423213150/work
charset-normalizer==3.0.1
click @ file:///opt/concourse/worker/volumes/live/709f530a-8ffd-4e02-4f5f-9b3cfddd0e0b/volume/click_1646056616410/work
cloud-tpu-client==0.10
cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1632508026186/work
cryptography @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_0ayq9hu973/croot/cryptography_1673298756837/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
dm-tree @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_1fp7e2h87y/croot/dm-tree_1671027438289/work
enum34==1.1.10
fastcluster @ file:///Users/runner/miniforge3/conda-bld/fastcluster_1646148981469/work
ffmpy==0.3.0
flatbuffers==23.1.4
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
fonttools==4.25.0
frozenlist @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_5eiq5594pj/croot/frozenlist_1670004516635/work
gast @ file:///Users/ktietz/demo/mc3/conda-bld/gast_1628588903283/work
google-api-core==1.34.0
google-api-python-client==1.8.0
google-auth @ file:///opt/conda/conda-bld/google-auth_1646735974934/work
google-auth-httplib2==0.1.0
google-auth-oauthlib==0.4.6
google-pasta @ file:///Users/ktietz/demo/mc3/conda-bld/google-pasta_1630577991354/work
googleapis-common-protos==1.58.0
grpcio==1.51.1
h5py @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_7fnj4n39p5/croots/recipe/h5py_1659091379933/work
httplib2==0.21.0
idna @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_00jf0h4zbt/croot/idna_1666125573348/work
imageio @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_5460df32-49d6-4565-9afb-ddc7de276101dvh8sph9/croots/recipe/imageio_1658785049436/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1673483481485/work
importlib-metadata==6.0.0
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1663332044897/work
keras @ file:///Users/ec2-user/miniconda3/conda-bld/keras_1669750590284/work/keras-2.10.0-py2.py3-none-any.whl
Keras-Applications==1.0.8
Keras-Preprocessing @ file:///tmp/build/80754af9/keras-preprocessing_1612283640596/work
kiwisolver @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_e26jwrjf6j/croot/kiwisolver_1672387151391/work
libclang==15.0.6.1
Markdown @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_90m_zl2p10/croot/markdown_1671541913695/work
MarkupSafe @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_d4a9444f-bd4c-4043-b47d-cede33979b0fve7bm42r/croots/recipe/markupsafe_1654597878200/work
matplotlib @ file:///opt/concourse/worker/volumes/live/f670ab63-e220-495e-450b-14c9591f195b/volume/matplotlib-suite_1634667037960/work
mkl-fft==1.3.1
mkl-random @ file:///opt/concourse/worker/volumes/live/f196a661-8e33-4aac-63bc-efb2ff50e035/volume/mkl_random_1626186080069/work
mkl-service==2.4.0
multidict @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_81upqwhecp/croot/multidict_1665674236996/work
munkres==1.1.4
numexpr @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_cef3ah6r8w/croot/numexpr_1668713880672/work
numpy @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_e3f85f54-0572-4c0e-b190-4bc4766fc3fenewxarhq/croots/recipe/numpy_and_numpy_base_1652801682879/work
nvidia-ml-py==11.525.84
oauth2client==4.1.3
oauthlib==3.2.2
opencv-python==4.7.0.68
opt-einsum @ file:///tmp/build/80754af9/opt_einsum_1621500238896/work
packaging==23.0
pexpect @ file:///tmp/build/80754af9/pexpect_1605563209008/work
Pillow==9.3.0
plaidml==0.7.0
plaidml-keras==0.7.0
protobuf==3.20.1
psutil @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_c9b604bf-685f-47f6-8304-238e4e70557e1o7mmsot/croots/recipe/psutil_1656431274701/work
ptyprocess @ file:///tmp/build/80754af9/ptyprocess_1609355006118/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pyasn1 @ file:///Users/ktietz/demo/mc3/conda-bld/pyasn1_1629708007385/work
pyasn1-modules==0.2.8
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
PyJWT @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_eec47dd9-fcc8-4e06-bbca-6138d11dbbcch6zasfc4/croots/recipe/pyjwt_1657544589510/work
pyOpenSSL @ file:///opt/conda/conda-bld/pyopenssl_1643788558760/work
pyparsing @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_3a17y2delq/croots/recipe/pyparsing_1661452538853/work
PySocks @ file:///opt/concourse/worker/volumes/live/85a5b906-0e08-41d9-6f59-084cee4e9492/volume/pysocks_1594394636991/work
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
PyYAML==6.0
requests==2.28.2
requests-oauthlib==1.3.1
rsa==4.9
scikit-learn @ file:///opt/concourse/worker/volumes/live/09b793c0-41c8-469b-7e31-b4a39bc68d95/volume/scikit-learn_1642617120730/work
scipy @ file:///opt/concourse/worker/volumes/live/7554f9b4-1616-4d3d-50f8-914545975614/volume/scipy_1641555024289/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
tensorboard==2.10.1
tensorboard-data-server @ file:///var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_b8sq3zgkhr/croot/tensorboard-data-server_1670853593214/work/tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl
tensorboard-plugin-wit==1.8.1
tensorflow @ file:///Users/ec2-user/miniconda3/conda-bld/tensorflow-base_1669830675347/work/tensorflow_pkg/tensorflow-2.10.0-cp38-cp38-macosx_10_14_x86_64.whl
tensorflow-cpu==2.10.1
tensorflow-estimator @ file:///Users/ec2-user/miniconda3/conda-bld/tensorflow-estimator_1669751680663/work/tensorflow_estimator-2.10.0-py2.py3-none-any.whl
tensorflow-io-gcs-filesystem==0.29.0
tensorflow-probability @ file:///tmp/build/80754af9/tensorflow-probability_1633017132682/work
termcolor==2.2.0
threadpoolctl @ file:///home/conda/feedstock_root/build_artifacts/threadpoolctl_1643647933166/work
tornado @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_1fimz6o0gc/croots/recipe/tornado_1662061695695/work
tqdm @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_2adqcbsqqd/croots/recipe/tqdm_1664392689227/work
typing_extensions @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_4b7xacf029/croot/typing_extensions_1669923792404/work
uritemplate==3.0.1
urllib3==1.26.14
Werkzeug @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_322nlonnpf/croot/werkzeug_1671215993374/work
wrapt @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_1ade1f68-8354-4db8-830b-ff3072015779vd_2hm7k/croots/recipe/wrapt_1657814407132/work
yarl @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_d8a27nidjc/croots/recipe/yarl_1661437080982/work
zipp @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_b71z79bye2/croot/zipp_1672387125902/work
============== Conda Packages ==============
# packages in environment at /Users/USERS/opt/anaconda3/envs/faceswap:
#
# Name Version Build Channel
_tflow_select 2.2.0 eigen
abseil-cpp 20211102.0 he9d5cce_0
absl-py 1.3.0 py38hecd8cb5_0
aiohttp 3.8.3 py38h6c40b1e_0
aiosignal 1.2.0 pyhd3eb1b0_0
astunparse 1.6.3 py_0
async-timeout 4.0.2 py38hecd8cb5_0
attrs 22.1.0 py38hecd8cb5_0
blas 1.0 mkl
blinker 1.4 py38hecd8cb5_0
brotli 1.0.9 hca72f7f_7
brotli-bin 1.0.9 hca72f7f_7
brotlipy 0.7.0 py38h9ed2024_1003
bzip2 1.0.8 h0d85af4_4 conda-forge
c-ares 1.18.1 hca72f7f_0
ca-certificates 2022.10.11 hecd8cb5_0
cachetools 4.2.4 pypi_0 pypi
certifi 2022.12.7 py38hecd8cb5_0
cffi 1.15.1 py38h6c40b1e_3
charset-normalizer 3.0.1 pypi_0 pypi
click 8.0.4 py38hecd8cb5_0
cloud-tpu-client 0.10 pypi_0 pypi
cloudpickle 2.0.0 pyhd3eb1b0_0
cryptography 38.0.4 py38hf6deb26_0
cycler 0.11.0 pyhd3eb1b0_0
decorator 5.1.1 pyhd3eb1b0_0
dm-tree 0.1.7 py38hcec6c5f_1
enum34 1.1.10 pypi_0 pypi
fastcluster 1.2.6 py38h52e9a12_0 conda-forge
ffmpeg 4.2.2 h97e5cf8_0
flatbuffers 23.1.4 pypi_0 pypi
flit-core 3.6.0 pyhd3eb1b0_0
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.12.1 hd8bbffd_0
frozenlist 1.3.3 py38h6c40b1e_0
gast 0.3.3 pypi_0 pypi
gettext 0.21.1 h8a4c099_0 conda-forge
giflib 5.2.1 haf1e3a3_0
gmp 6.2.1 h2e338ed_0 conda-forge
gnutls 3.6.13 h756fd2b_1 conda-forge
google-api-core 1.34.0 pypi_0 pypi
google-api-python-client 1.8.0 pypi_0 pypi
google-auth 1.35.0 pypi_0 pypi
google-auth-httplib2 0.1.0 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
google-pasta 0.2.0 pyhd3eb1b0_0
googleapis-common-protos 1.58.0 pypi_0 pypi
grpc-cpp 1.46.1 h067a048_0
grpcio 1.51.1 pypi_0 pypi
h5py 2.10.0 pypi_0 pypi
hdf5 1.10.6 hdbbcd12_0
httplib2 0.21.0 pypi_0 pypi
icu 58.2 h0a44026_3
idna 3.4 py38hecd8cb5_0
imageio 2.19.3 py38hecd8cb5_0
imageio-ffmpeg 0.4.8 pyhd8ed1ab_0 conda-forge
importlib-metadata 6.0.0 pypi_0 pypi
intel-openmp 2021.4.0 hecd8cb5_3538
joblib 1.2.0 pyhd8ed1ab_0 conda-forge
jpeg 9e hca72f7f_0
keras 2.10.0 py38hecd8cb5_0
keras-applications 1.0.8 pypi_0 pypi
keras-preprocessing 1.1.2 pyhd3eb1b0_0
kiwisolver 1.4.4 py38hcec6c5f_0
krb5 1.19.2 hcd88c3b_0
lame 3.100 hb7f2c08_1003 conda-forge
lcms2 2.12 hf1fd2bf_0
lerc 3.0 he9d5cce_0
libbrotlicommon 1.0.9 hca72f7f_7
libbrotlidec 1.0.9 hca72f7f_7
libbrotlienc 1.0.9 hca72f7f_7
libclang 15.0.6.1 pypi_0 pypi
libcurl 7.86.0 ha585b31_0
libcxx 14.0.6 h9765a3e_0
libdeflate 1.8 h9ed2024_5
libedit 3.1.20221030 h6c40b1e_0
libev 4.33 h9ed2024_1
libffi 3.4.2 hecd8cb5_6
libgfortran 3.0.1 h93005f0_2
libiconv 1.17 hac89ed1_0 conda-forge
libnghttp2 1.46.0 ha29bfda_0
libopus 1.3.1 hc929b4f_1 conda-forge
libpng 1.6.37 ha441bb4_0
libprotobuf 3.20.1 h8346a28_0
libssh2 1.10.0 h0a4fc7d_0
libtiff 4.5.0 h2cd0358_0
libvpx 1.7.0 h378b8a2_0
libwebp 1.2.4 h56c3ce4_0
libwebp-base 1.2.4 hca72f7f_0
llvm-openmp 15.0.7 h61d9ccf_0 conda-forge
lz4-c 1.9.4 hcec6c5f_0
markdown 3.4.1 py38hecd8cb5_0
markupsafe 2.1.1 py38hca72f7f_0
matplotlib 3.4.3 py38hecd8cb5_0
matplotlib-base 3.4.3 py38h0a11d32_0
mkl 2021.4.0 hecd8cb5_637
mkl-service 2.4.0 py38h9ed2024_0
mkl_fft 1.3.1 py38h4ab4a9b_0
mkl_random 1.2.2 py38hb2f4e1b_0
multidict 6.0.2 py38hca72f7f_0
munkres 1.1.4 py_0
ncurses 6.3 hca72f7f_3
nettle 3.6 hedd7734_0 conda-forge
numexpr 2.8.4 py38he696674_0
numpy 1.16.0 pypi_0 pypi
numpy-base 1.22.3 py38h3b1a694_0
oauth2client 4.1.3 pypi_0 pypi
oauthlib 3.2.2 pypi_0 pypi
openh264 2.1.1 h8346a28_0
openssl 1.1.1s hca72f7f_0
opt_einsum 3.3.0 pyhd3eb1b0_1
packaging 23.0 pypi_0 pypi
pexpect 4.8.0 pyhd3eb1b0_3
pillow 9.3.0 py38h81888ad_1
pip 22.3.1 py38hecd8cb5_0
plaidml 0.7.0 pypi_0 pypi
plaidml-keras 0.7.0 pypi_0 pypi
protobuf 3.19.6 pypi_0 pypi
psutil 5.9.0 py38hca72f7f_0
ptyprocess 0.7.0 pyhd3eb1b0_2
pyasn1 0.4.8 pyhd3eb1b0_0
pyasn1-modules 0.2.8 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0
pyjwt 2.4.0 py38hecd8cb5_0
pyopenssl 22.0.0 pyhd3eb1b0_0
pyparsing 3.0.9 py38hecd8cb5_0
pysocks 1.7.1 py38_1
python 3.8.15 h218abb5_2
python-dateutil 2.8.2 pyhd3eb1b0_0
python-flatbuffers 2.0 pyhd3eb1b0_0
python_abi 3.8 2_cp38 conda-forge
pyyaml 6.0 pypi_0 pypi
re2 2022.04.01 he9d5cce_0
readline 8.2 hca72f7f_0
requests 2.28.2 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.9 pypi_0 pypi
scikit-learn 1.0.2 py38hae1ba45_1
scipy 1.7.3 py38h8c7af03_0
setuptools 65.6.3 py38hecd8cb5_0
six 1.16.0 pyhd3eb1b0_1
snappy 1.1.9 he9d5cce_0
sqlite 3.40.1 h880c91c_0
tensorboard 2.10.1 pypi_0 pypi
tensorboard-data-server 0.6.1 py38h7242b5c_0
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
tensorflow 2.2.3 pypi_0 pypi
tensorflow-base 2.10.0 eigen_py38h61e1807_0
tensorflow-cpu 2.10.1 pypi_0 pypi
tensorflow-estimator 2.10.0 py38hecd8cb5_0
tensorflow-io-gcs-filesystem 0.29.0 pypi_0 pypi
tensorflow-probability 0.14.0 pyhd3eb1b0_0
termcolor 2.2.0 pypi_0 pypi
threadpoolctl 3.1.0 pyh8a188c0_0 conda-forge
tk 8.6.12 h5d9f67b_0
tornado 6.2 py38hca72f7f_0
tqdm 4.64.1 py38hecd8cb5_0
typing-extensions 4.4.0 py38hecd8cb5_0
typing_extensions 4.4.0 py38hecd8cb5_0
uritemplate 3.0.1 pypi_0 pypi
urllib3 1.26.14 pypi_0 pypi
werkzeug 2.2.2 py38hecd8cb5_0
wheel 0.35.1 pyhd3eb1b0_0
wrapt 1.14.1 py38hca72f7f_0
x264 1!157.20191217 h1de35cc_0
xz 5.2.8 h6c40b1e_0
yarl 1.8.1 py38hca72f7f_0
zipp 3.11.0 py38hecd8cb5_0
zlib 1.2.13 h4dc903c_0
zstd 1.5.2 hcb37349_0
================= Configs ==================
--------- convert.ini ---------
[color.color_transfer]
clip: True
preserve_paper: True
[color.match_hist]
threshold: 99.0
[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0
[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate
[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: None
profile: auto
level: auto
skip_mux: False
[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False
[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3
[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 6
erosion: 1.2
[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1
[scaling.sharpen]
method: unsharp_mask
amount: 150
radius: 0.3
threshold: 5.0
--------- gui.ini ---------
[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True
--------- .faceswap ---------
backend: amd
--------- extract.ini ---------
[global]
allow_growth: False
[detect.mtcnn]
minsize: 20
scalefactor: 0.709
batch-size: 8
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
[detect.cv2_dnn]
confidence: 50
[detect.s3fd]
confidence: 70
batch-size: 4
[align.fan]
batch-size: 12
[mask.unet_dfl]
batch-size: 8
[mask.vgg_obstructed]
batch-size: 2
[mask.vgg_clear]
batch-size: 6
[mask.bisenet_fp]
batch-size: 8
weights: faceswap
include_ears: False
include_hair: False
include_glasses: True
--------- train.ini ---------
[global]
centering: face
coverage: 68.75
icnr_init: False
conv_aware_init: False
optimizer: adam
learning_rate: 5e-05
epsilon_exponent: -7
reflect_padding: False
allow_growth: False
mixed_precision: False
nan_protection: True
convert_batchsize: 16
[global.loss]
loss_function: ssim
mask_loss_function: mse
l2_reg_term: 100
eye_multiplier: 3
mouth_multiplier: 2
penalized_mask_loss: True
mask_type: extended
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
[model.phaze_a]
output_size: 128
shared_fc: None
enable_gblock: True
split_fc: True
split_gblock: False
split_decoders: False
enc_architecture: fs_original
enc_scaling: 40
enc_load_weights: True
bottleneck_type: dense
bottleneck_norm: None
bottleneck_size: 1024
bottleneck_in_encoder: True
fc_depth: 1
fc_min_filters: 1024
fc_max_filters: 1024
fc_dimensions: 4
fc_filter_slope: -0.5
fc_dropout: 0.0
fc_upsampler: upsample2d
fc_upsamples: 1
fc_upsample_filters: 512
fc_gblock_depth: 3
fc_gblock_min_nodes: 512
fc_gblock_max_nodes: 512
fc_gblock_filter_slope: -0.5
fc_gblock_dropout: 0.0
dec_upscale_method: subpixel
dec_norm: None
dec_min_filters: 64
dec_max_filters: 512
dec_filter_slope: -0.45
dec_res_blocks: 1
dec_output_kernel: 5
dec_gaussian: True
dec_skip_last_residual: True
freeze_layers: keras_encoder
load_layers: encoder
fs_original_depth: 4
fs_original_min_filters: 128
fs_original_max_filters: 1024
mobilenet_width: 1.0
mobilenet_depth: 1
mobilenet_dropout: 0.001
[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512
[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False
[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512
[model.dlight]
features: best
details: good
output_size: 256
[model.villain]
lowmem: False
[model.dfaker]
output_size: 128
[model.original]
lowmem: False
[model.dfl_h128]
lowmem: False
[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
```
|
closed
|
2023-01-17T12:07:09Z
|
2023-01-20T13:05:03Z
|
https://github.com/deepfakes/faceswap/issues/1296
|
[] |
DrDreGFunk
| 1
|
apache/airflow
|
data-science
| 47,810
|
Old asset is still displayed on the UI after renaming the method decorated by the asset decorator
|
### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
The old asset is still displayed on the UI after renaming the method decorated by the asset decorator.
<img width="1707" alt="Image" src="https://github.com/user-attachments/assets/46de77aa-64a9-4335-a681-32ff00ca4a73" />
<img width="1434" alt="Image" src="https://github.com/user-attachments/assets/a6624fee-8a20-42f4-8099-fccca99e5333" />
### What you think should happen instead?
Only the latest asset should be displayed on the UI.
### How to reproduce
1. Trigger the below DAG:
```python
@asset(schedule=None)
def abc():
    pass
```
2. Notice that an asset named 'abc' is created on the assets page.
3. Update the method name from abc to abcde and trigger the DAG again.
4. See two assets, 'abc' and 'abcde', on the assets page in the UI.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
closed
|
2025-03-15T10:26:50Z
|
2025-03-19T18:23:34Z
|
https://github.com/apache/airflow/issues/47810
|
[
"kind:bug",
"priority:high",
"area:core",
"area:UI",
"area:datasets",
"area:task-execution-interface-aip72",
"area:task-sdk",
"affected_version:3.0.0beta"
] |
atul-astronomer
| 6
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 1,490
|
CUDA version
|
I followed the README to install the environment but ran into this issue: "NVIDIA GeForce RTX 3080 Laptop GPU with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37."
I changed the CUDA version to 10.2, 11.0, and 11.2, but none of them worked. Does anyone know how to fix this?
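In case it helps, this is the quick check I am using to see whether the installed wheel actually ships sm_86 kernels; the functions are from the public torch.cuda API, but the install hint in the comment is only my guess:
```python
import torch

# Which CUDA toolkit the wheel was built against, and which GPU architectures
# it contains kernels for. An RTX 3080 (sm_86) needs a CUDA >= 11.1 build.
print("torch:", torch.__version__)
print("built for CUDA:", torch.version.cuda)
print("supported archs:", torch.cuda.get_arch_list())
print("cuda available:", torch.cuda.is_available())

# If 'sm_86' is missing from the list above, my assumption is that installing
# a cu111/cu113 wheel via the official PyTorch install selector (pytorch.org)
# instead of the default package would fix it.
```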
|
open
|
2022-10-06T14:23:59Z
|
2024-03-12T03:11:53Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1490
|
[] |
AriaZiyanYang
| 2
|
nteract/papermill
|
jupyter
| 568
|
Papermill raises error for id field when running jupyter-notebook 6.2 with the -p flag
|
When running papermill v2.2.1 in an environment where jupyter-notebook v6.2 is installed, I am seeing the following error when running with parameters (the -p flag):
papermill notebook.ipynb outputs/notebook.ipynb -p my_config --no-progress-bar --log-output
Output:
```
[NbConvertApp] ERROR | Notebook JSON is invalid: Additional properties are not allowed ('id' was unexpected)
[2021-01-18T09:44:24.218Z] Failed validating 'additionalProperties' in code_cell:
[2021-01-18T09:44:24.218Z] On instance['cells'][0]:
[2021-01-18T09:44:24.218Z] {'cell_type': 'code',
[2021-01-18T09:44:24.218Z] 'execution_count': 1,
[2021-01-18T09:44:24.218Z] 'id': 'nasty-bearing',
[2021-01-18T09:44:24.218Z] 'metadata': {'execution': {'iopub.execute_input': '2021-01-18T09:44:22.903942Z',
[2021-01-18T09:44:24.218Z] 'iopub.status.busy': '2021-01-18T09:44:22.903349Z',
[2021-01-18T09:44:24.218Z] 'iopub.status.idle': '2021-01-18T09:44:22.905999Z',
[2021-01-18T09:44:24.218Z] 'shell.execute_reply': '2021-01-18T09:44:22.905474Z'},
[2021-01-18T09:44:24.218Z] 'papermill': {'duration': 0.01294,
[2021-01-18T09:44:24.218Z] 'end_time': '2021-01-18T09:44:22.906187',
[2021-01-18T09:44:24.218Z] 'exception': False,
[2021-01-18T09:44:24.218Z] 'start_time': '2021-01-18T09:44:22.893247',
[2021-01-18T09:44:24.218Z] 'status': 'completed'},
[2021-01-18T09:44:24.218Z] 'tags': ['injected-parameters']}
```
I think this may be due to a change in jupyter-notebook v6.2 that added an "id" field to the cell properties: https://github.com/jupyter/notebook/pull/5928/.
The error does not occur when jupyter-notebook v6.1.6 or earlier is running.
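As a temporary workaround on my side, stripping the new id fields before handing the notebook to papermill seems to avoid the validation error. This is a hack rather than a proper fix, and the file names are just examples:
```python
import nbformat

# Read the notebook written by notebook 6.2, drop the per-cell ids,
# and save it back at nbformat minor version 4, which predates the id field.
nb = nbformat.read("notebook.ipynb", as_version=4)
for cell in nb.cells:
    cell.pop("id", None)
nb["nbformat_minor"] = 4
nbformat.write(nb, "notebook_noids.ipynb")
```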
|
closed
|
2021-01-18T10:30:14Z
|
2022-01-31T15:41:20Z
|
https://github.com/nteract/papermill/issues/568
|
[] |
malcolmbovey
| 16
|
ageitgey/face_recognition
|
machine-learning
| 730
|
about clf file
|
* face_recognition version:
* Python version: 3.4
* Operating System: ubuntu 16.04
### Description
The .clf file is in a binary format. After recognising a face, I just want to extract the features of the recognised face from the .clf file, so that one of my requirements gets solved.
Please advise whether there are existing methods for this, or suggest any alternative.
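For context, this is roughly what I am imagining, assuming the .clf file is a pickled scikit-learn classifier as in the KNN examples; the file paths and the private `_fit_X` attribute are my assumptions, not something confirmed by the library docs:
```python
import pickle

import face_recognition

# Hypothetical paths -- substitute your own files.
CLF_PATH = "trained_knn_model.clf"
IMAGE_PATH = "unknown.jpg"

# If the .clf was produced like in the knn example, it is just a pickled
# scikit-learn KNeighborsClassifier and can be loaded directly.
with open(CLF_PATH, "rb") as f:
    knn_clf = pickle.load(f)

# The 128-d features the classifier was trained on are the face encodings,
# which are computed from the image itself rather than stored per prediction.
image = face_recognition.load_image_file(IMAGE_PATH)
face_locations = face_recognition.face_locations(image)
encodings = face_recognition.face_encodings(image, known_face_locations=face_locations)

for encoding in encodings:
    prediction = knn_clf.predict([encoding])
    print(prediction, encoding[:5])  # predicted label plus the first few feature values

# The training-set encodings held by the classifier itself should also be
# reachable via knn_clf._fit_X, though that is a private scikit-learn
# attribute and may change between versions.
```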
### What I Did
It's not really an issue; I just want to take my use of the library to the next level.
|
open
|
2019-01-29T10:46:54Z
|
2019-01-29T10:46:54Z
|
https://github.com/ageitgey/face_recognition/issues/730
|
[] |
saideepthik
| 0
|
apachecn/ailearning
|
python
| 582
|
Machine Learning
|
NULL
|
closed
|
2020-04-09T04:03:19Z
|
2020-04-09T04:07:10Z
|
https://github.com/apachecn/ailearning/issues/582
|
[] |
dabaozizhang
| 0
|
iMerica/dj-rest-auth
|
rest-api
| 301
|
Auto refresh
|
Currently my REST API has a middleware that creates a cookie with the expiration date every time I log in or refresh.
```python
from django.utils import timezone
from django.conf import settings
from rest_framework_simplejwt.settings import api_settings as jwt_settings


class CustomMiddleware(object):
    def __init__(self, get_response):
        """
        One-time configuration and initialisation.
        """
        self.get_response = get_response

    def __call__(self, request):
        """
        Code to be executed for each request before the view (and later
        middleware) are called.
        """
        response = self.get_response(request)
        loginPath = '/dj-rest-auth/login/'
        refreshPath = '/dj-rest-auth/token/refresh/'
        if request.path == loginPath or request.path == refreshPath:
            access_token_expiration = (timezone.now() + jwt_settings.ACCESS_TOKEN_LIFETIME)
            refresh_token_expiration = (timezone.now() + jwt_settings.REFRESH_TOKEN_LIFETIME)
            cookie_secure = getattr(settings, 'JWT_AUTH_SECURE', False)
            cookie_samesite = getattr(settings, 'JWT_AUTH_SAMESITE', 'Lax')
            refresh_cookie_path = getattr(settings, 'JWT_AUTH_REFRESH_COOKIE_PATH', '/')
            response.set_cookie(
                settings.JWT_TOKEN_EXPIRATION_COOKIE,
                access_token_expiration.strftime('%Y-%m-%dT%H:%M:%SZ'),
                expires=refresh_token_expiration,
                secure=cookie_secure,
                httponly=False,
                samesite=cookie_samesite,
                path=refresh_cookie_path,
            )
        return response

    def process_view(self, request, view_func, view_args, view_kwargs):
        # ...
```
So, in the React application I use an axios interceptor to check the expiration date and, if it has expired, refresh before any other request.
```javascript
instance.interceptors.request.use(async (config) => {
  await refreshToken();
  return config;
}, (error) => {
  return Promise.reject(error);
})

const refreshToken = async () => {
  const exp = Cookies.get(process.env.REACT_APP_TOKEN_EXPIRATION_COOKIE);
  if (exp !== undefined && new Date() >= new Date(exp)) {
    await axios.post('/dj-rest-auth/token/refresh/')
      .catch(error => window.location.href = '/login')
  }
}
// ...
```
This doesn't seem like good practice to me; I would like to do the refresh inside the middleware.
I've even tried to generate the token via simplejwt in the middleware and then use the "set_jwt_cookies" method; however, a 401 error occurs on the first request.
```python
import re
from django.utils import timezone, dateparse
from django.conf import settings
from rest_framework import serializers
from rest_framework_simplejwt.settings import api_settings as jwt_settings
from rest_framework_simplejwt.serializers import TokenRefreshSerializer
from dj_rest_auth import jwt_auth

# ...

def __call__(self, request):
    """
    Code to be executed for each request before the view (and later
    middleware) are called.
    """
    refresh_cookie_name = getattr(settings, 'JWT_AUTH_REFRESH_COOKIE', None)
    expiration_cookie_name = getattr(settings, 'JWT_TOKEN_EXPIRATION_COOKIE', None)
    refresh_cookie_value = request.COOKIES.get(refresh_cookie_name)
    expiration_cookie_value = request.COOKIES.get(expiration_cookie_name)
    if refresh_cookie_value and expiration_cookie_value:
        exp = dateparse.parse_datetime(expiration_cookie_value)
        if timezone.now() >= exp:
            serializer = TokenRefreshSerializer(data={'refresh': refresh_cookie_value})
            if serializer.is_valid():
                response = self.get_response(request)
                jwt_auth.set_jwt_cookies(
                    response,
                    serializer.validated_data['access'],
                    serializer.validated_data['refresh']
                )
    response = self.get_response(request)
    # ...
```
Is an auto-refresh mechanism possible? I mean, doing the refresh on the backend side?
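Something along these lines is what I am imagining: refresh before the view runs and inject the new access token into request.COOKIES so the current request already authenticates, then set the cookies on the response. This is only a sketch and assumes JWT_AUTH_COOKIE names the access-token cookie; it is untested:
```python
from django.conf import settings
from django.utils import timezone, dateparse
from rest_framework_simplejwt.serializers import TokenRefreshSerializer
from dj_rest_auth import jwt_auth


class AutoRefreshMiddleware:
    """Sketch only: refresh *before* the view runs so the current request
    already carries a valid access token."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        access_cookie = getattr(settings, 'JWT_AUTH_COOKIE', None)
        refresh_cookie = getattr(settings, 'JWT_AUTH_REFRESH_COOKIE', None)
        exp_cookie = getattr(settings, 'JWT_TOKEN_EXPIRATION_COOKIE', None)

        refresh_value = request.COOKIES.get(refresh_cookie)
        exp_value = request.COOKIES.get(exp_cookie)
        new_tokens = None

        if refresh_value and exp_value:
            exp = dateparse.parse_datetime(exp_value)
            if exp is not None and timezone.now() >= exp:
                serializer = TokenRefreshSerializer(data={'refresh': refresh_value})
                if serializer.is_valid():
                    new_tokens = serializer.validated_data
                    # Make the refreshed access token visible to the
                    # authentication class for *this* request, not just the next one.
                    request.COOKIES[access_cookie] = new_tokens['access']

        response = self.get_response(request)

        if new_tokens is not None:
            jwt_auth.set_jwt_cookies(
                response,
                new_tokens['access'],
                new_tokens.get('refresh', refresh_value),
            )
        return response
```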
|
open
|
2021-08-09T13:16:39Z
|
2021-08-09T13:23:13Z
|
https://github.com/iMerica/dj-rest-auth/issues/301
|
[] |
lcsjunior
| 0
|
WZMIAOMIAO/deep-learning-for-image-processing
|
pytorch
| 292
|
A warning appears as soon as I use the tf.function decorator
|
WARNING:tensorflow:AutoGraph could not transform <bound method MyModel.call of <model.MyModel object at 0x00000172D60EA898>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: No module named 'tensorflow_core.estimator'
The warning is shown above. Reinstalling tensorflow-estimator did not help. It looks like something goes wrong when converting the graph structure; if I remove the decorator, the warning does not appear. How can I solve this?
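For reference, a simplified sketch of the kind of model that triggers the warning for me (not my exact code, just the same shape: a subclassed Keras model with a tf.function-decorated call):
```python
import tensorflow as tf


class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(10)

    @tf.function
    def call(self, inputs):
        return self.dense(inputs)


model = MyModel()
# The first call traces the function; the AutoGraph warning appears here.
_ = model(tf.random.normal((2, 5)))
```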
|
closed
|
2021-06-05T16:05:53Z
|
2021-06-07T05:35:38Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/292
|
[] |
dianjxu
| 2
|
developmentseed/lonboard
|
data-visualization
| 742
|
[BUG] Panel tutorial is out of date with geopandas 1.X
|
## Context
I was attempting to run this example: https://developmentseed.org/lonboard/latest/ecosystem/panel/#tutorial
## Resulting behaviour, error message or logs
The dataset referenced in the tutorial is no longer available inside of geopandas
```
raise AttributeError(error_msg)
AttributeError: The geopandas.dataset has been deprecated and was removed in GeoPandas 1.0. You can get the original 'naturalearth_cities' data from https://www.naturalearthdata.com/downloads/110m-cultural-vectors/.
```
## Environment
- OS: Ubuntu
- Browser: Firefox
- Lonboard Version: 0.10.3
- Geopandas Version: 1.0.1
## Steps to reproduce the bug
Literally just follow the tutorial.
The tutorial should probably be updated with version dependencies or new instructions for getting the dataset.
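For example, something along these lines could replace the deprecated helper in the tutorial; the CDN URL for the Natural Earth populated-places layer is my assumption and would need checking:
```python
import geopandas as gpd

# The populated-places layer that used to back 'naturalearth_cities'.
# This URL is an assumed location of the 110m Natural Earth download.
url = "https://naciscdn.org/naturalearth/110m/cultural/ne_110m_populated_places.zip"
cities = gpd.read_file(url)

print(cities.head())
```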
|
closed
|
2025-02-04T23:25:52Z
|
2025-02-06T23:10:29Z
|
https://github.com/developmentseed/lonboard/issues/742
|
[
"bug"
] |
j-carson
| 6
|