| repo_name (str, 9–75) | topic (str, 30 classes) | issue_number (int64, 1–203k) | title (str, 1–976) | body (str, 0–254k) | state (str, 2 classes) | created_at (str, 20) | updated_at (str, 20) | url (str, 38–105) | labels (list, 0–9) | user_login (str, 1–39) | comments_count (int64, 0–452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
darrenburns/posting
|
rest-api
| 136
|
Body text editor doesn't update to reflect content-type
|
If I select an `application/json` request, the body editor should update to reflect that.
|
closed
|
2024-11-16T17:27:17Z
|
2025-03-02T18:09:33Z
|
https://github.com/darrenburns/posting/issues/136
|
[
"bug"
] |
darrenburns
| 0
|
gradio-app/gradio
|
machine-learning
| 9,876
|
Parameter passing of button.click()
|
### Describe the bug
When using `button.click(fn=..., inputs=[...], ...)`, if an input parameter is a button component, the component's type changes after being passed to the target function `fn`.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def setup_feedback_buttons(like_btn: gr.Button, dislike_btn: gr.Button):
    print(f"Before setting visible: like_btn.visible = {like_btn.visible}, dislike_btn.visible = {dislike_btn.visible}")
    like_btn.visible = True
    dislike_btn.visible = True

with gr.Blocks() as demo:
    like_btn = gr.Button("Like", visible=False)
    dislike_btn = gr.Button("Dislike", visible=False)
    submit_btn = gr.Button("Submit")
    submit_btn.click(fn=setup_feedback_buttons, inputs=[like_btn, dislike_btn], outputs=None)

demo.launch(debug=True)
```
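For context, Gradio passes each input component's *value* (for a `Button`, its label string) into the handler, which is why the traceback below ends in `'str' object has no attribute 'visible'`. A minimal sketch of the conventional pattern, assuming the goal is to reveal the buttons on click: return updates as outputs instead of mutating the components.

```python
import gradio as gr

def show_feedback_buttons():
    # Return updates for the output components rather than mutating them;
    # gr.update carries the property change back to the client.
    return gr.update(visible=True), gr.update(visible=True)

with gr.Blocks() as demo:
    like_btn = gr.Button("Like", visible=False)
    dislike_btn = gr.Button("Dislike", visible=False)
    submit_btn = gr.Button("Submit")
    submit_btn.click(fn=show_feedback_buttons, inputs=None,
                     outputs=[like_btn, dislike_btn])

demo.launch()
```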
### Screenshot
_No response_
### Logs
```shell
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/gradio/queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/dist-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1935, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1520, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "<ipython-input-4-ce9f8f625052>", line 4, in setup_feedback_buttons
print(f"Before setting visible: like_btn.visible = {like_btn.visible}, dislike_btn.visible = {dislike_btn.visible}")
AttributeError: 'str' object has no attribute 'visible'
```
### System Info
```shell
Package Version
------------------ -----------
aiofiles 23.2.1
annotated-types 0.7.0
anyio 4.6.2.post1
Brotli 1.1.0
certifi 2024.8.30
cffi 1.17.1
charset-normalizer 3.4.0
click 8.1.7
colorama 0.4.6
contourpy 1.3.0
cycler 0.12.1
dnspython 2.7.0
email_validator 2.2.0
exceptiongroup 1.2.2
fastapi 0.115.4
fastapi-cli 0.0.5
ffmpy 0.3.0
filelock 3.16.1
fonttools 4.54.1
fsspec 2024.10.0
gradio 5.1.0
gradio_client 1.4.0
h11 0.14.0
h2 4.1.0
hpack 4.0.0
httpcore 1.0.6
httptools 0.6.1
httpx 0.27.2
huggingface_hub 0.26.2
hyperframe 6.0.1
idna 3.10
Jinja2 3.1.4
kiwisolver 1.4.7
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.9.2
mdurl 0.1.2
numpy 2.1.2
orjson 3.10.10
packaging 24.1
pandas 2.2.3
pillow 10.2.0
pip 24.3.1
pycparser 2.22
pydantic 2.0.3
pydantic_core 2.3.0
pydub 0.25.1
Pygments 2.18.0
pyparsing 3.2.0
PySocks 1.7.1
python-dateutil 2.9.0
python-dotenv 1.0.1
python-multipart 0.0.16
pytz 2024.1
PyYAML 6.0.2
requests 2.32.3
rich 13.9.3
ruff 0.7.1
semantic-version 2.10.0
setuptools 75.1.0
shellingham 1.5.4
six 1.16.0
sniffio 1.3.1
starlette 0.41.2
tomlkit 0.12.0
tqdm 4.66.6
typer 0.12.5
typer-slim 0.12.5
typing_extensions 4.12.2
tzdata 2024.2
unicodedata2 15.1.0
urllib3 2.2.3
uvicorn 0.32.0
uvloop 0.21.0
watchfiles 0.24.0
websockets 12.0
wheel 0.44.0
zstandard 0.23.0
I tried multiple versions of Python (3.10, 3.11, 3.12, 3.13) and Gradio (4.44, 4.44.1, 5.4) on Ubuntu 18.04 and Ubuntu 22.04, and all produced the same error
```
### Severity
I can work around it
|
closed
|
2024-10-31T08:29:56Z
|
2024-11-01T01:43:12Z
|
https://github.com/gradio-app/gradio/issues/9876
|
[
"bug"
] |
Semper4u
| 3
|
nteract/papermill
|
jupyter
| 102
|
Release 0.12.0
|
I could really use some of the recent changes in my current tasks. Any objections or particular PRs people want in this release? Was going to maybe wait for #100.
|
closed
|
2018-01-15T22:48:30Z
|
2018-01-16T10:37:11Z
|
https://github.com/nteract/papermill/issues/102
|
[] |
MSeal
| 5
|
keras-team/keras
|
tensorflow
| 20,395
|
Loss documentation is wrong? Loss function actually returns means over a batch
|
https://keras.io/api/losses/#standalone-usage-of-losses
It states: `By default, loss functions return one scalar loss value per input sample, e.g.`
But the example is passing 4 samples, `ops.ones((2, 2,))`, and returning 2 values: `<Array: shape=(2,), dtype=float32, numpy=array([1., 1.], dtype=float32)>`
So which is it?
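For what it's worth, a minimal check (my sketch, assuming Keras 3, where losses reduce over the last axis) suggests the docs and the example agree if the shape-`(2, 2)` input is read as 2 samples of 2 features, so "one scalar per sample" yields 2 values:

```python
from keras import ops
from keras.losses import mean_squared_error

y_true = ops.ones((2, 2))   # read as 2 samples x 2 features
y_pred = ops.zeros((2, 2))

loss = mean_squared_error(y_true, y_pred)
print(loss.shape)  # (2,) -- the feature axis is reduced, one value per sample
```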
|
closed
|
2024-10-22T16:27:13Z
|
2024-12-15T16:54:05Z
|
https://github.com/keras-team/keras/issues/20395
|
[
"type:docs-bug"
] |
worthy7
| 2
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 10,103
|
Connection error on Google Cloud SQL with 2.0.18
|
### Describe the bug
SQLAlchemy 2.0.18 fails to parse Google Cloud database URLs, since they contain multiple `:` characters.
The issue impacts version `2.0.18`; all is good with `2.0.16` (I haven't tried `2.0.17`).
See the repro for more details.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.18
### DBAPI (i.e. the database driver)
postgresql+psycopg2
### Database Vendor and Major Version
PostgreSQL 14
### Python Version
3.11
### Operating system
Linux
### To Reproduce
this is a typical connection string for a google cloud SQL connection
"postgresql+psycopg2://<user>:<pass>@/mydb?host=/cloudsql/projectid:region:dbname"
```python
url = "postgresql+psycopg2://myuser:mypass@/mydatabase?host=/cloudsql/myproject:australia-southeast1:database-connection-name'
import sqlalchemy as sa
sa.create_engine(url=url)
```
When issuing that command, the error below occurs.
### Error
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 2, in create_engine
File "/home/user/.cache/pypoetry/virtualenvs/api-RCNUpdqz-py3.11/lib/python3.11/site-packages/sqlalchemy/util/deprecations.py", line 281, in warned
return fn(*args, **kwargs) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^
File "/home/stefanotabacco/.cache/pypoetry/virtualenvs/api-RCNUpdqz-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/create.py", line 617, in create_engine
(cargs_tup, cparams) = dialect.create_connect_args(u)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.cache/pypoetry/virtualenvs/api-RCNUpdqz-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/_psycopg_common.py", line 134, in create_connect_args
multihosts, multiports = self._split_multihost_from_url(url)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.cache/pypoetry/virtualenvs/api-RCNUpdqz-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/base.py", line 3124, in _split_multihost_from_url
h, p = hosts[0].split(":")
^^^^
ValueError: too many values to unpack (expected 2)
```
### Additional context
I suspect this is preventing anyone who uses Google Cloud SQL from connecting to the database via the URL string.
Moreover, when connecting via Google Cloud Run or similar services, the gunicorn workers die with a pretty unclear `Service unavailable` message. It took me a full day to figure out the problem. I have pinned sqlalchemy to `2.0.16` and it's all working again.
I'm not sure about other cloud providers, but it might affect those as well, depending on whether they have multiple `:` in their connection URLs.
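A possible interim workaround, sketched under the assumption that psycopg2 accepts the Unix-socket directory via `host` in `connect_args` (this is my suggestion, not an official fix): keep the Cloud SQL socket path out of the URL so the multihost parser never splits it.

```python
import sqlalchemy as sa

# Workaround sketch: pass the socket directory directly to the DBAPI via
# connect_args instead of embedding it in the URL query string.
url = "postgresql+psycopg2://myuser:mypass@/mydatabase"
engine = sa.create_engine(
    url,
    connect_args={
        "host": "/cloudsql/myproject:australia-southeast1:database-connection-name"
    },
)
```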
|
closed
|
2023-07-14T00:35:00Z
|
2023-07-14T02:52:41Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/10103
|
[
"duplicate",
"postgresql"
] |
stabacco
| 1
|
mitmproxy/mitmproxy
|
python
| 7,117
|
HTTP3 over local mode: Unable to access certain websites using firefox via HTTP3
|
#### Problem Description
Firefox fails to access `cloudflare-quic.com` over HTTP3 while using mitmproxy in local mode. Other websites such as `http3.is` load normally over HTTP3.
refs #7025
#### Steps to reproduce the behavior:
1. Start local mode: `mitmdump --mode local:firefox --set experimental_transparent_http3=true`
2. Start firefox and access `http3.is` to ensure HTTP3 works
3. Accessing `cloudflare-quic.com` does not work over HTTP3
Error raised: `TLS Error: [('SSL routines', '', '')]`
`Server QUIC handshake failed. unknown error (0x128)`
`Client QUIC handshake failed. unknown error (0x179)`
#### System Information
Mitmproxy: 11.0.0.dev (+18, commit 6bb536e)
Python: 3.11.3
OpenSSL: OpenSSL 3.2.2 4 Jun 2024
Platform: Windows-10-10.0.22631-SP0
|
open
|
2024-08-18T16:44:49Z
|
2024-08-29T12:11:25Z
|
https://github.com/mitmproxy/mitmproxy/issues/7117
|
[
"kind/triage"
] |
errorxyz
| 3
|
seleniumbase/SeleniumBase
|
pytest
| 2,291
|
Twitter scraping is detected - 429 (Too Many Requests)
|
Hello everybody,
I'm writing a bot to continuously scrape a Twitter account to retrieve the most recent tweets.
After 25 refresh requests, Twitter's response is as follows: 429 (Too Many Requests).
Below is the code and the response header by Twitter:
```python
import traceback
from seleniumbase import SB
import random

def manageException(sb, profile_url):
    print("Something went wrong in Chromedriver library.")
    print(traceback.format_exc())
    sb.get_new_driver(undetectable=True)
    sb.driver.uc_open_with_reconnect(profile_url, reconnect_time=3)
    sb.sleep(1.2)
    return sb.driver.get_text("div[data-testid='tweetText']")

print("Logging to Twitter...")
with SB(uc=True) as sb:
    sb.driver.uc_open("https://www.twitter.com")
    # do the login in Twitter
    account = "daniesilvestri"  # account with no blue tick
    profile_url = f'https://twitter.com/{account}'
    sb.driver.uc_open(profile_url)
    last_tweet = ""
    try:
        last_tweet = sb.driver.get_text("div[data-testid='tweetText']")
    except Exception as err:
        last_tweet = manageException(sb, profile_url)
    print(last_tweet)
    # do some stuff with the tweet
    sb.sleep(random.randint(800, 2100) / 1000.0)
    while 1:
        sb.driver.refresh()
        new_tweet = ""
        try:
            new_tweet = sb.driver.get_text("div[data-testid='tweetText']")
        except Exception as err:
            new_tweet = manageException(sb, profile_url)
        print(new_tweet)
        # do some stuff
        sb.sleep(random.randint(800, 2100) / 1000.0)
```
The response header:
<img width="277" alt="Immagine 2023-11-16 122338" src="https://github.com/seleniumbase/SeleniumBase/assets/3901806/c19a31e9-fbe7-4ad2-a7c1-75ce6561f7db">
As you can see the parameter "X-Rate-Limit-Remaining" is 0. That means that it is no longer possible to make requests like this one.
The bot is limited for approximately 12 minutes (the exact time at which it will be possible to refresh the page again is represented by the "X-Rate-Limit-Reset" header).
Documentation on the parameters can be found here: https://developer.twitter.com/en/docs/twitter-api/rate-limits
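Not a bypass, but the usual mitigation is to honour the reset header; a sketch (the helper name is mine, not SeleniumBase API):

```python
import time

def wait_for_rate_limit_reset(reset_epoch: str) -> None:
    """Sleep until the epoch timestamp from X-Rate-Limit-Reset has passed."""
    delay = int(reset_epoch) - int(time.time())
    if delay > 0:
        time.sleep(delay)
```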
Is there a way to bypass the limits imposed by Twitter? For example, I tried logging out and logging in again but it doesn't work.
Thanks in advance
|
closed
|
2023-11-16T13:26:27Z
|
2023-11-16T14:48:35Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2291
|
[
"question",
"UC Mode / CDP Mode"
] |
fashionprivate
| 1
|
holoviz/panel
|
matplotlib
| 7,430
|
JSComponent with DataFrame almost impossible to implement
|
I'm on Panel 1.5.2
If one of the arguments to `.matches` below is a dataframe

then you get
```bash
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
`.matches` can only handle `pandas.Series`, not `pandas.DataFrame`.
I tried changing the precedence of the DataFrame parameter to -1 because I don't need it on the client side. But then `model.lookup(attr)` fails with a `ValueError`.
## Minimal, Reproducible Example
When you click the button, it errors:
```python
import panel as pn
import param
from panel.custom import JSComponent
import pandas as pd

pn.extension()

class MyCustomComponent(JSComponent):
    value = param.DataFrame()

    _esm = """
    export function render({ model }) {
      console.log(model)
    }
    """

custom_component = MyCustomComponent(value=pd.DataFrame())
button = pn.widgets.Button(name="update")

@pn.depends(button, watch=True)
def _update_counter_button(value):
    custom_component.value = pd.DataFrame({"1": [2]})

pn.Column(button, custom_component).servable()
```
```bash
File "/home/jovyan/repos/mt-pm-reporting/.venv/lib/python3.11/site-packages/panel/reactive.py", line 331, in _apply_update
self._update_model(events, msg, root, model, doc, comm)
File "/home/jovyan/repos/mt-pm-reporting/.venv/lib/python3.11/site-packages/panel/custom.py", line 467, in _update_model
self._set_on_model(data_msg, root, model.data)
File "/home/jovyan/repos/mt-pm-reporting/.venv/lib/python3.11/site-packages/panel/reactive.py", line 1586, in _set_on_model
self._changing[root.ref['id']] = [
^
File "/home/jovyan/repos/mt-pm-reporting/.venv/lib/python3.11/site-packages/panel/reactive.py", line 1586, in <listcomp>
self._changing[root.ref['id']] = [
^
File "/home/jovyan/repos/mt-pm-reporting/.venv/lib/python3.11/site-packages/pandas/core/generic.py", line 1577, in __nonzero__
raise ValueError(
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
[Try on PY.CAFE](https://py.cafe/snippet/panel/v1#c=H4sIAM6IF2cAA41SwYrbMBD9FaGTA16z2bItGAJl05ZS6KUt7SEOQbEmsUCWVElO1hj_e0dSnKa7XbL2wZ6ZN--9kWagteZASypao60nhimQhDliVKXOOcvaSu2sblO9qDvnMTjVv3xfavxToPxFj-LIEoh4hVRGFfDoQTmhVTYLmUrVkjlHvvbLyHbmyC74ZmWlCD4HJjsgi2Sl-MA8-4Q_kJgCYAOuxXpF4xtT8Bid7DpVe1QlFhQHmw2kxYklGWdkSEBCaq2cllBIvc9iFXlDfkyfiTWajmY39eQQRZ9PEO0uDL90ipTbzns0gmOo4ij4HrwrHmIuU4hZVLQznHmo6OmE3iOQg0HjLkvNOTkyXzeLH7YDBHHYkU1qQkud8mA3CZg8TOf31HVxPtBLj0NF5xUtyepuPZ4coIGlll2rzvpPqWaFA3tgWxlvg-bUwu9OWGix5nCx4sbEgu9N2LSYwJAZ81PAkZY7Jh3kFLjwH1UgoqXH8XJqet9oFVp6zQWHm8Ntcfe2mGOzZL3uPC0HegAbloqWdyittf-mkXKYxCxGOa0bITnePy1X54pnWwcei3gTvqHl_P42p61Qv1L4JkWfQewb1Amh4Ni2ExIekBWHXmrlmVBgX1AI0JttwiLEsMBL6bge8-cuXrD4tw-PqzD9v92vmO76RCeH16aZBvm__ebEOn93_xpN3FNMM3lNdMIF1fCOeTxV3KvVevwD7XffAbwEAAA)
## Does not work if precedence=-1 either
```python
import panel as pn
import param
from panel.custom import JSComponent
import pandas as pd

pn.extension()

class MyCustomComponent(JSComponent):
    value = param.DataFrame(precedence=-1)

    _esm = """
    export function render({ model }) {
      console.log(model)
    }
    """

custom_component = MyCustomComponent(value=pd.DataFrame())
button = pn.widgets.Button(name="update")

@pn.depends(button, watch=True)
def _update_counter_button(value):
    custom_component.value = pd.DataFrame({"1": [2]})

pn.Column(button, custom_component).servable()
```
```bash
File "/home/jovyan/repos/mt-pm-reporting/.venv/lib/python3.11/site-packages/panel/reactive.py", line 331, in _apply_update
self._update_model(events, msg, root, model, doc, comm)
File "/home/jovyan/repos/mt-pm-reporting/.venv/lib/python3.11/site-packages/panel/custom.py", line 467, in _update_model
self._set_on_model(data_msg, root, model.data)
File "/home/jovyan/repos/mt-pm-reporting/.venv/lib/python3.11/site-packages/panel/reactive.py", line 1586, in _set_on_model
self._changing[root.ref['id']] = [
^
File "/home/jovyan/repos/mt-pm-reporting/.venv/lib/python3.11/site-packages/panel/reactive.py", line 1588, in <listcomp>
if not model.lookup(attr).property.matches(getattr(model, attr), value)
^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/mt-pm-reporting/.venv/lib/python3.11/site-packages/bokeh/core/has_props.py", line 503, in lookup
raise AttributeError(f"{cls.__name__}.{name} property descriptor does not exist")
AttributeError: MyCustomComponent1.value property descriptor does not exist
```
[Try on PY.CAFE](https://py.cafe/snippet/panel/v1#c=H4sIADOJF2cAA41SwYrbMBD9FaGTA47ZpGwLhkDZtKUUemlLe4hDUKxJIpBHqiQnG4L_vSM5TtPdLtn4EM_MmzfvjefEayOBl1w11rjArEDQTHhmscJLzommwo0zTV8v6tYHCs71L9_nht4QMFz1oCSWSCQrorJYwGMA9MpgNoqZCmstvGdfj_PEduHIrvhGZYWMfnuhW2CzXkrxQQTxiV4gsw5qkIA1zMaTRBvRK_ANgSuenpSCxyRr02IdSAJzgBJcdmIN2desG7FTD2SsNuiNhkKbbZaqxBvzXf83sCYHSfmqHuTS0Od2kvaZlVeyR0S5bkMgIeQJi4OSWwi-eEi5DAkzq3hrpQhQ8fO63hNQgiXhPuubc3YQod7NfrgWCCRhw1Z9E0lqMYBb9cBew7DMp6qLy3avNZ4qPql4yRbTZXdWQALmRrcNXuY_pRoVHtxerDV5rJDn3MHvVjloqObpytL5pEI42nh2KUGhsPanggMvN0J7yDlIFT5iJOJlIHs5t8ewMxhbjkYqCeP9XTF9W0yoWYujaQMvT3wPLl4YL6c02pjwzRDlaRjmKMp5vVNa0vfn5eJSCWLtIVCRvkTY8XJyf5fzRuGvPnzTR59BbXc0J4ZKUttGaXggVjI9NxiEQnAvTIjQ8brHEsSKyMt5t-zy5ypekPi3j9ZV2OO_3a9wd9vRWeEtN4OR_8vfnVkn7-5fM5PulNJC3xo64OLU-HR52ird1WLZ_QE979oCyQQAAA)
|
closed
|
2024-10-22T11:01:08Z
|
2024-10-29T17:01:06Z
|
https://github.com/holoviz/panel/issues/7430
|
[] |
MarcSkovMadsen
| 3
|
proplot-dev/proplot
|
matplotlib
| 330
|
Remove this
|
Hi,
Thank you for your amazing work.
I think the first example on page https://proplot.readthedocs.io/en/stable/basics.html should use
`pplt.subplots()` instead of `pplt.subplot()`, right?
There is `figure.subplot()` but no `pplt.subplot()`.
EDIT: I am blind. Everything is alright in the docs.
|
closed
|
2022-01-28T12:13:54Z
|
2022-01-29T17:27:57Z
|
https://github.com/proplot-dev/proplot/issues/330
|
[
"documentation"
] |
lkugler
| 4
|
RobertCraigie/prisma-client-py
|
asyncio
| 118
|
Fields using the `Bytes` type cannot be serialised to json
|
<!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/latest/logging/ for how to enable additional logging output.
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
```py
from prisma.models import Types
from prisma.fields import Base64

model = await Types.prisma().create(
    data={
        'bytes': Base64.encode(b'foo'),
    },
)
print(model.json(indent=2))
```
```
File ".venv/lib/python3.9/site-packages/pydantic/json.py", line 95, in pydantic_encoder
raise TypeError(f"Object of type '{obj.__class__.__name__}' is not JSON serializable")
TypeError: Object of type 'Base64' is not JSON serializable
```
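A possible interim workaround, assuming pydantic v1 (which the traceback indicates) and that `Base64` has a usable string form; a sketch, not the library's fix:

```py
# Sketch: pydantic v1's .json() accepts a fallback `encoder` callable that is
# only invoked for values its default encoder cannot serialise.
print(model.json(indent=2, encoder=str))
```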
|
closed
|
2021-11-13T21:04:43Z
|
2021-11-13T21:20:54Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/118
|
[
"bug/2-confirmed",
"kind/bug"
] |
RobertCraigie
| 0
|
supabase/supabase-py
|
flask
| 699
|
increasing the storage list limit beyond 100
|
Rather than doing something like this:
```
# Function to list all files with pagination
def list_all_files(storage, bucket_name):
    offset = 0
    limit = 100  # The maximum number of files to retrieve per request
    all_files = []
    while True:
        response = storage.from_(bucket_name).list(limit=limit, offset=offset)
        files = response.data
        if files:
            all_files.extend(files)
            offset += limit
        else:
            break
    return all_files

# List all files in the bucket
all_files = list_all_files(supabase.storage, bucket_name)
print(f"Total files: {len(all_files)}")
```
I prefer to get a list of all file names in a bucket like
```
url: str = os.getenv("SUPABASE_URL")
#key: str = os.getenv("SUPABASE_SERVICE_ROLE_KEY")
key: str = os.getenv("SUPABASE_ANON_KEY")
supabase: Client = create_client(url, key)
SUPABASE_EMAIL = os.environ.get("SUPABASE_EMAIL")
SUPABASE_PASSWORD = os.environ.get("SUPABASE_PASSWORD")
# print secrets
print(SUPABASE_EMAIL, SUPABASE_PASSWORD)
my_session = supabase.auth.sign_in_with_password({"email": SUPABASE_EMAIL, "password":SUPABASE_PASSWORD})
#
# Your storage bucket name
bucket_name = "scratch-assignments"
#%%
# List files in the bucket
files = supabase.storage.from_("scratch-assignments").list()
#%%
print(len(files)) #prints 100 but there are 104 files in the storage
#%%
```
That's a list of dictionaries, but the contents are not that large, so I'm not sure why it's set to such a low limit. Can you parameterize it, like:
```
files = supabase.storage.from_("scratch-assignments", max_ret=1000)
```
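If I read the underlying storage3 client correctly, `list()` already accepts search options; a sketch (treat the exact option keys as my assumption, not confirmed API):

```
# Sketch, assuming storage3's list(path, options) accepts a "limit" key:
files = supabase.storage.from_("scratch-assignments").list(
    "",  # folder path; "" for the bucket root
    {"limit": 1000, "offset": 0},
)
print(len(files))
```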
|
closed
|
2024-02-22T09:34:29Z
|
2024-06-25T11:13:57Z
|
https://github.com/supabase/supabase-py/issues/699
|
[] |
nyck33
| 1
|
wkentaro/labelme
|
computer-vision
| 1,427
|
JSON file integration
|
Why does the latest version of labelme's annotated JSON file not appear in the same JSON file?
|
open
|
2024-04-14T02:12:06Z
|
2024-04-14T02:12:06Z
|
https://github.com/wkentaro/labelme/issues/1427
|
[] |
xhlho
| 0
|
miguelgrinberg/python-socketio
|
asyncio
| 821
|
Client background thread doesn't stop after Timeout
|
If I am right, there are some background threads/tasks that handle incoming messages. I am calling:
```python
try:
    v = sio.call('get_v', 'data', timeout=5)
except socketio.exceptions.TimeoutError as e:
    print("error")
finally:
    sio.disconnect()
    # main thread exits here, but the python process doesn't
```
Assuming the server returns after 10s, I have to wait 10s **with or without** `sio.disconnect()` for the Python process to end. It seems some background threads are preventing the process from terminating. In the case of a timeout, why does it have to wait? Or is there any way to abort those non-daemon threads?
I'm running the latest client on a Windows PC.
|
closed
|
2021-11-25T05:34:50Z
|
2022-04-29T23:42:36Z
|
https://github.com/miguelgrinberg/python-socketio/issues/821
|
[
"enhancement"
] |
goyzhang
| 6
|
apache/airflow
|
automation
| 47,800
|
Rename the triggerer's `default_capacity` config to just `capacity`
|
### Apache Airflow version
main (development)
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
The [default_capacity config](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#default-capacity) doesn't really make sense. Once you change the value, it's no longer the default value, but it's still called `default_capacity`.
### What you think should happen instead?
The config name should just be `capacity`
### How to reproduce
n/a
### Operating System
n/a
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
closed
|
2025-03-14T20:34:32Z
|
2025-03-22T04:12:42Z
|
https://github.com/apache/airflow/issues/47800
|
[
"kind:bug",
"affected_version:main_branch",
"area:Triggerer",
"affected_version:3.0.0beta"
] |
RNHTTR
| 1
|
PrefectHQ/prefect
|
data-science
| 16,955
|
Logs missing in parent flow when running subflows in different processes
|
### Bug summary
Original slack message: https://prefect-community.slack.com/archives/CL09KU1K7/p1738591882063619
Thanks to the latest release (3.1.15), I'm trying to make use of the new utility function `run_flow_in_subprocess` to create and run subflows with multiprocessing. Here is a simplified example of what I'm trying to do:
```py
import asyncio
import multiprocessing

from prefect import flow, task
from prefect.flow_engine import run_flow_in_subprocess

@task
async def long_running_task(sleep: int):
    await asyncio.sleep(sleep)

@flow
async def my_flow(items: list[int]):
    return await asyncio.gather(*[long_running_task(i) for i in items])

@flow
async def my_flow_distributed(items: list[int]):
    n_procs = multiprocessing.cpu_count()
    batch_size = len(items) // n_procs
    procs = []
    for i in range(0, len(items), batch_size):
        proc = run_flow_in_subprocess(flow=my_flow, parameters={"items": items[i : i + batch_size]})
        procs.append(proc)
    exit_codes = [p.join() for p in procs]
    if any(exit_codes):
        raise ValueError()
    return True

if __name__ == "__main__":
    items = list(range(10000))
    asyncio.run(my_flow_distributed(items))
```
It works, but logs are missing in the parent flow:
* Logs not showing in the parent flow (screenshot not preserved)
* Tasks and logs missing in the sub flows (screenshots not preserved)

while locally I can see the logs (screenshot not preserved).
### Version info
```Text
Version: 3.1.15
API version: 0.8.4
Python version: 3.12.3
Git commit: 3ac3d548
Built: Thu, Jan 30, 2025 11:31 AM
OS/Arch: linux/x86_64
Profile: production
Server type: server
Pydantic version: 2.10.6
Integrations:
prefect-slack: 0.3.1
prefect-dask: 0.3.2
prefect-aws: 0.5.3
```
### Additional context
_No response_
|
open
|
2025-02-04T10:59:10Z
|
2025-02-17T19:28:58Z
|
https://github.com/PrefectHQ/prefect/issues/16955
|
[
"bug"
] |
obendidi
| 1
|
albumentations-team/albumentations
|
machine-learning
| 1,551
|
[tech debt] Add linting rule to check two arrays addition
|
Add a linting rule that checks that, when two arrays are added with coefficients, we use `cv2.addWeighted` and not `array1 * alpha + array2 * beta`. The two forms are sketched below.
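For illustration (my example, not the proposed rule's implementation), the two forms compute the same result; `cv2.addWeighted` does it in a single fused pass:

```python
import cv2
import numpy as np

a = np.random.rand(64, 64).astype(np.float32)
b = np.random.rand(64, 64).astype(np.float32)
alpha, beta = 0.7, 0.3

slow = a * alpha + b * beta                      # form the rule should flag
fast = cv2.addWeighted(a, alpha, b, beta, 0.0)   # preferred fused call
assert np.allclose(slow, fast, atol=1e-6)
```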
|
closed
|
2024-02-29T16:13:38Z
|
2024-06-19T03:41:06Z
|
https://github.com/albumentations-team/albumentations/issues/1551
|
[
"Tech debt"
] |
ternaus
| 0
|
explosion/spaCy
|
nlp
| 13,434
|
Spacy problem with whitespace or punctuation
|
Hi everyone!
I have a problem when I train my spaCy NER model. I have an annotated span like $400 in a text, but when I train my model I get this error:
```
ValueError: [E024] Could not find an optimal move to supervise the parser. Usually, this means that the model can't be updated in a way that's valid and satisfies the correct annotations specified in the GoldParse. For example, are all labels added to the model? If you're training a named entity recognizer, also make sure that none of your annotated entity spans have leading or trailing whitespace or punctuation. You can also use the `debug data` command to validate your JSON-formatted training data. For details, run:
python -m spacy debug data --help
```
So how can I make spaCy handle the $ character?
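A small diagnostic sketch (my suggestion, applying the E024 hint about span/token alignment): check whether the annotated character offsets line up with token boundaries.

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("The price was $400 yesterday.")
span = doc.char_span(14, 18)  # character offsets covering "$400"
print(span)  # None would mean the offsets don't align with token boundaries
```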
|
closed
|
2024-04-11T12:49:42Z
|
2024-05-16T00:02:26Z
|
https://github.com/explosion/spaCy/issues/13434
|
[
"usage"
] |
salma2302
| 2
|
desec-io/desec-stack
|
rest-api
| 479
|
Support for Handshake names
|
Hi.
I'm trying to host a zone for a Handshake name on deSEC. From what I can tell, it is possible to add second-level domains (`my.whatevername.`), but not the name itself (`whatevername.`).
It's great that deSEC is not limited to domains with an ICANN TLD like other services are, and hopefully it will be able to host records for just the name too.
Adding 2 domains (say `apple.whatevername.` and `orange.whatevername.`) is possible right now as the NS required to be added in the registry is the same (`ns1.desec.io` and `ns2.desec.org`).
The problem is when setting up **DS** records. 2 different domains on deSEC ask different DS records to be added.
So if deSEC allows adding the name itself (`whatevername.` in the example), then a common DS record would work for all "sub" domains.
To be clear, deSEC does not have to interact with Handshake at all, only allow the name to be added, not just SLDs.
What is Handshake?
> Handshake is a UTXO-based blockchain protocol that manages the registration, renewal, and transfer of DNS top-level domains (TLDs). Our naming protocol differs from its predecessors in that it has no concept of namespacing or subdomains at the consensus layer. Its purpose is not to replace DNS, but to replace the root zone file and the root servers.
Official website: https://handshake.org/
Documentation: https://hsd-dev.org/
|
closed
|
2020-11-19T18:16:28Z
|
2020-11-20T12:33:55Z
|
https://github.com/desec-io/desec-stack/issues/479
|
[
"more info needed"
] |
rithvikvibhu
| 10
|
deepspeedai/DeepSpeed
|
machine-learning
| 6,772
|
[BUG] [Fix-Suggested] ZeRO Stage 3 Overwrites Module ID Attribute Causing Incorrect Expert Placement on GPUs
|
## Description
We experienced wrong GPU placement when doing MoE with ZeRO Stage 3. We use `module.id` to control which expert is loaded onto which GPU for fine-grained control, and we found that `module.id` gets corrupted after `deepspeed.initialize`.
## Suspected Root Cause
DeepSpeed uses `.id` in ZeRO Stage 3 optimization to manage states, as seen in [`runtime/zero/parameter_offload.py:L271`](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/zero/parameter_offload.py#L269-L271).
This practice is very brittle:
1. `id` is an overly generic attribute name that can easily collide with user-defined attributes.
2. There is no check on the `.id` attribute before setting it, which allows accidental overwrites and hard-to-diagnose problems.
In the specific bug we encountered (bug.py provided below), each expert module is identified by its `.id` attribute, but during initialization `.id` is overwritten by the `_register_hooks_recursively` function in `deepspeed/runtime/zero/stage3.py`, leading to a mess in expert-GPU placement.
### To reproduce
The following steps reproduce how ZeRO Stage 3 overwrites the `.id` attribute:
1. Install deepspeed `0.15.4`
2. run `bug.py` using `deepspeed --num_gpus=2 bug.py` (the `num_gpus` argument doesn't matter here; use 1 if you don't have multi-GPU nodes)
```python
import torch
import deepspeed
from torch.nn import Module, Linear

# Define a simple expert module
class Expert(Module):
    def __init__(self, id):
        super().__init__()
        self.id = id  # ID for custom GPU placement
        self.fc = Linear(128, 128)

    def forward(self, x):
        return self.fc(x)

# Create a model with 60 experts
class MoEModel(Module):
    def __init__(self):
        super().__init__()
        self.experts = torch.nn.ModuleList([Expert(i) for i in range(60)])

    def forward(self, x, expert_id):
        return self.experts[expert_id](x)

# Helper function to log expert ids
def log_expert_ids(model, rank):
    loaded_experts = [e.id for e in model.experts]
    print(rank, loaded_experts)

def main():
    deepspeed.init_distributed()
    rank = torch.distributed.get_rank()

    # Create model
    model = MoEModel()
    log_expert_ids(model, rank)  # prints 0, 1, 2, ..., 59

    # Configure DeepSpeed
    model_engine, optimizer, _, _ = deepspeed.initialize(
        model=model,
        optimizer=torch.optim.Adam(model.parameters(), lr=3e-5),
        config={
            "train_micro_batch_size_per_gpu": 1,
            "gradient_accumulation_steps": 1,
            "steps_per_print": 1,
            "zero_optimization": {"stage": 3},
        },
    )

    # print model ids again after deepspeed.initialize
    log_expert_ids(model, rank)  # prints 0, 2, 4, 6, ...
    # if you call deepspeed.initialize here again, the ids get mangled further

    dummy_input = torch.randn(1, 128).cuda(rank)
    for expert_id in range(60):
        model_engine(dummy_input, expert_id=expert_id)

if __name__ == "__main__":
    main()
```
3. We print the `id`s of all experts twice, once before `deepspeed.initialize` and once after. Observe that the first print gives `0, 1, 2, ..., 59` while the second gives `2, 4, 6, 8, ..., 120`.
In this code, `module.id` is set to a value based on a counter (`my_count`), which conflicts with the user-defined `.id` attributes used for expert placement.
## Bug Significance
This bug can significantly affect model behavior when expert modules are incorrectly placed across GPUs, leading to incorrect training outcomes or potential crashes. Ensuring that internal DeepSpeed modifications do not overwrite user-defined attributes is crucial for stability and expected functionality.
Even if user-side conflicts are out of scope for you, DeepSpeed itself can accidentally modify these attributes as well: for example, you can reproduce the same problem by calling `deepspeed.initialize` multiple times.
Thus, we argue for two fixes / engineering practices for this issue.
## Expected Behavior / Suggested Fix
1. **Use a Specific Attribute for Internal IDs**: Instead of overwriting `.id`, use a more specific attribute name such as `_deepspeed_id` to avoid conflicts with user-defined attributes.
2. **Restrict Attribute Modification**: Modify the `__setattr__` method to only allow setting fields that have not been previously set, preventing unintentional overwrites of user-defined attributes.
3. **Forbid duplicated `deepspeed.initialize`**: We observe a lot of issues with accidental duplicate calls to `deepspeed.initialize`. Thus we suggest forbidding duplicate calls by recording the models/optimizers that have already been initialized, as mentioned in #6770. (A sketch of fixes 1 and 2 follows below.)
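A minimal sketch of fixes 1 and 2 (my illustration; `tag_module` and `_deepspeed_id` are invented names, not DeepSpeed code):

```python
# Sketch: tag modules under a namespaced attribute and refuse to clobber it.
def tag_module(module, ds_id):
    if getattr(module, "_deepspeed_id", None) is not None:
        raise RuntimeError(
            "module already tagged with _deepspeed_id; "
            "was deepspeed.initialize called twice?"
        )
    module._deepspeed_id = ds_id  # fix 1: cannot collide with a user `.id`
```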
## ds_report output
<details>
<summary>Click to Show</summary>
<br>
<pre><code>collect2: error: ld returned 1 exit status
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.2
[WARNING] using untested triton version (2.2.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/xxx/python3.10/site-packages/torch']
torch version .................... 2.2.2+cu121
deepspeed install path ........... ['/home/xxx/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.15.4, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.3
deepspeed wheel compiled w. ...... torch 2.2, cuda 12.1
shared memory (/dev/shm) size .... 31.24 GB
</code></pre>
</details>
**I will be more than happy to contribute to the two suggested fixes, let me know what you think!**
|
closed
|
2024-11-20T23:05:43Z
|
2025-01-31T18:02:58Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6772
|
[
"bug",
"training"
] |
traincheck-team
| 4
|
yzhao062/pyod
|
data-science
| 89
|
CBLOF not converging expected different data type
|
I have been unable to flag anomalies using this algorithm. I have found that when I run the CBLOF algorithm it throws the following error:
ValueError: Buffer dtype mismatch, expected 'INT' but got 'long long'
Exception ignored in: 'sklearn.cluster._k_means._assign_labels_csr' ValueError: Buffer dtype mismatch, expected 'INT' but got 'long long'
Which results in:
ValueError: Could not form valid cluster separation. Please change n_clusters or change clustering method
It appears that the CBLOF algorithm depends on sklearn.cluster, and the data type that pyod passes to sklearn is not what sklearn expects.
Below are four scenarios that I have prepared using different parameters for CBLOF. Note that the same error is thrown regardless of these parameters.
I have also tried changing the cluster size using the elbow method to find the optimal K in the KMeans scenario.
```python
from pyod.models.cblof import CBLOF
import pyod.utils as ut
from sklearn import cluster

# create some data
data = ut.data.generate_data()[0]

# scenario 1 - use default CBLOF parameters
model = CBLOF()
clusters = model.fit_predict(data)

# scenario 2 - use kmeans as a centroid estimator
n_clusters = 3
kmeans = cluster.KMeans(n_clusters)
model = CBLOF(n_clusters=n_clusters, clustering_estimator=kmeans)
clusters = model.fit_predict(data)

# test if scaling the data makes a difference
data_scaled = (data - data.min()) / (data.max() - data.min())

# scenario 3 - no clusters specified, use defaults, scaled data
model = CBLOF()
clusters = model.fit_predict(data_scaled)

# scenario 4 - use kmeans as a centroid estimator, scaled data
n_clusters = 3
kmeans = cluster.KMeans(n_clusters)
model = CBLOF(n_clusters=n_clusters, clustering_estimator=kmeans)
clusters = model.fit_predict(data_scaled)
```
|
closed
|
2019-05-09T07:30:25Z
|
2020-05-08T14:15:22Z
|
https://github.com/yzhao062/pyod/issues/89
|
[] |
wbarich
| 7
|
Nekmo/amazon-dash
|
dash
| 73
|
Evaluate to change sniff filters
|
Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
### What is the purpose of your *issue*?
- [X] Bug report (encountered problems with amazon-dash)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
### Guideline for bug reports
* amazon-dash version: 1.1.1
* Python version: 2.7.9
* Pip & Setuptools version: 18.0
* Operating System: Raspbian (Linux version 4.9.35-v7+)
- [X] The `pip install` or `setup install` command has been completed without errors
- [X] The `python -m amazon_dash.install` command has been completed without errors
- [X] The `amazon-dash discovery` command works without errors
- [X] I have created/edited the configuration file
- [X] *Amazon-dash service* or `amazon-dash --debug run` works
#### Description
It seems the amazon-dash service consumes a significant share of CPU (around 15% on average) on standby. I think this is too much for a service like this, especially if you are using a Raspberry Pi as a server. On the same Raspberry Pi, other services like lighttpd, plexmediaserver, transmission and home assistant do not consume more than 2% on standby.
#### What I Did
I ran 'top' and 'htop' for a while and observed the results (sorted by cpu usage).
|
closed
|
2018-08-11T12:12:51Z
|
2018-09-03T18:21:32Z
|
https://github.com/Nekmo/amazon-dash/issues/73
|
[] |
etatus
| 6
|
jpadilla/django-rest-framework-jwt
|
django
| 406
|
'ReverseManyToOneDescriptor' object has no attribute 'get_by_natural_key'
|
```
AttributeError at /api/v0/lockers/
'ReverseManyToOneDescriptor' object has no attribute 'get_by_natural_key'
Request Method: GET
Request URL: http://172.16.0.89:8000/api/v0/lockers/
Django Version: 1.11.8
Python Executable: /var/webapps/locker_project/env/bin/python
Python Version: 3.5.2
Python Path: ['/var/webapps/locker_project/code', '/var/webapps/locker_project/env/lib/python35.zip', '/var/webapps/locker_project/env/lib/python3.5', '/var/webapps/locker_project/env/lib/python3.5/plat-x86_64-linux-gnu', '/var/webapps/locker_project/env/lib/python3.5/lib-dynload', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-x86_64-linux-gnu', '/var/webapps/locker_project/env/lib/python3.5/site-packages']
Server time: Fri, 15 Dec 2017 12:43:02 +0000
Installed Applications:
['grappelli',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'ckeditor',
'django_cleanup',
'imagekit',
'rest_framework',
'rest_framework.authtoken',
'locker_project.accounts',
'locker_project.lockers',
'locker_project.operations']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback:
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/rest_framework/request.py" in __getattribute__
385. return getattr(self._request, attr)
During handling of the above exception ('WSGIRequest' object has no attribute 'successful_authenticator'), another exception occurred:
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/django/core/handlers/exception.py" in inner
41. response = get_response(request)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/django/core/handlers/base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/django/core/handlers/base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/django/views/decorators/csrf.py" in wrapped_view
58. return view_func(*args, **kwargs)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/django/views/generic/base.py" in view
68. return self.dispatch(request, *args, **kwargs)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/rest_framework/views.py" in dispatch
489. response = self.handle_exception(exc)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/rest_framework/views.py" in handle_exception
449. self.raise_uncaught_exception(exc)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/rest_framework/views.py" in dispatch
477. self.initial(request, *args, **kwargs)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/rest_framework/views.py" in initial
395. self.check_permissions(request)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/rest_framework/views.py" in check_permissions
330. request, message=getattr(permission, 'message', None)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/rest_framework/views.py" in permission_denied
169. if request.authenticators and not request.successful_authenticator:
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/rest_framework/request.py" in __getattribute__
387. six.reraise(info[0], info[1], info[2].tb_next)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/django/utils/six.py" in reraise
685. raise value.with_traceback(tb)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/rest_framework/request.py" in successful_authenticator
238. self._authenticate()
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/rest_framework/request.py" in _authenticate
345. user_auth_tuple = authenticator.authenticate(self)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/rest_framework_jwt/authentication.py" in authenticate
43. user = self.authenticate_credentials(payload)
File "/var/webapps/locker_project/env/lib/python3.5/site-packages/rest_framework_jwt/authentication.py" in authenticate_credentials
59. user = User.objects.get_by_natural_key(username)
```
|
closed
|
2017-12-15T12:44:55Z
|
2017-12-15T13:12:57Z
|
https://github.com/jpadilla/django-rest-framework-jwt/issues/406
|
[] |
ArtemBernatskyy
| 1
|
marimo-team/marimo
|
data-science
| 4,161
|
Github integration in the community cloud
|
### Description
I would like to be able to make pull requests from edits to notebooks and files for projects mirrored from github.
### Suggested solution
Add a "Preview Pull Request" button next to `$+/- Compare` on History tab of project in dashboard. This option might only make sense if the project is mirrored from GitHub.
### Alternative
None that I could think of.
### Additional context
_No response_
|
open
|
2025-03-19T15:02:37Z
|
2025-03-19T15:05:00Z
|
https://github.com/marimo-team/marimo/issues/4161
|
[
"enhancement"
] |
dchassin
| 0
|
ebhy/budgetml
|
fastapi
| 9
|
Extra files/scripts in Docker container
|
Hi @htahir1 , thanks for the super handy library !
I am wondering whether it is possible to include some extra Python files when creating the Docker container.
I am attempting to run inference with a custom model, and thus I need a bunch of files: checkpoint, model file, config, and so on.
I couldn't find anything mentioning this in the docs.
Thanks for your help 😄
|
closed
|
2021-03-02T17:08:07Z
|
2021-03-09T15:24:09Z
|
https://github.com/ebhy/budgetml/issues/9
|
[] |
JulesBelveze
| 4
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 825
|
Training works only with gray scale images
|
Hi,
I have used the pix2pix model to train on a custom dataset with pairs: A is the real image, while B is the black & white version of A. When I trained the model with 3-channel (i.e. RGB) images (`--n_channels 3`), the model does not learn anything and just outputs a black image. But when I use grayscale images (`--n_channels 1`), it learns perfectly. Btw, I use resnet9 for G and basic for D. Can you please explain this behaviour?
|
closed
|
2019-11-05T09:13:37Z
|
2019-11-06T19:15:51Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/825
|
[] |
kalai2033
| 4
|
biolab/orange3
|
scikit-learn
| 6,001
|
Recursive imputer
|
Use case: We have a model with 200 features. We apply a “Model base imputer (simple tree)” to fill in missing data.
Problem: The “Imputer” widget fills in just a part of the missing data (that's the limit of the default 1-NN regressor used).
Current workaround: We have chained 5 instances of the same imputer, that’s to say, as much as needed to “complete” the imputing procedure for our data. At each stage, more data are imputed, up until the point where the regressor cannot produce further predictions.
Proposed solution: Add a check-box in the Impute widget, to activate iteration. The process will be repeated leveraging the imputed data from the previous iteration. Loop until no more changes are produced.
|
closed
|
2022-06-02T15:41:36Z
|
2022-09-30T08:49:45Z
|
https://github.com/biolab/orange3/issues/6001
|
[
"bug"
] |
hydrastarmaster
| 5
|
tensorpack/tensorpack
|
tensorflow
| 1,072
|
Some questions for image mean subtraction
|
1. For `get_per_pixel_mean()`, it is better to avoid using test_files. It is unfair for the training data to use statistics drawn from test_files, which follow a subtly different distribution, and I am not sure whether it affects the final test evaluation. (A train-only sketch follows after the snippet below.)
2. For testing, it is better to use the image mean computed from the training images, and to save that mean with the model for later use at prediction time. Is there an easy way to save/restore it along with the model?
```
def get_per_pixel_mean(self):
    """
    Returns:
        a mean image of all (train and test) images of size 32x32x3
    """
    train_files, test_files, _ = get_filenames(self.dir, self.cifar_classnum)
    all_imgs = [x[0] for x in read_cifar(train_files + test_files, self.cifar_classnum)]
    arr = np.array(all_imgs, dtype='float32')
    mean = np.mean(arr, axis=0)
    return mean
```
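A sketch of the train-only variant point 1 suggests (my adaptation of the snippet above, assuming the same `get_filenames`/`read_cifar`/`np` context):

```
def get_per_pixel_mean_train_only(self):
    """Mean image computed from the training split only, so test-set
    statistics never leak into preprocessing."""
    train_files, _, _ = get_filenames(self.dir, self.cifar_classnum)
    imgs = [x[0] for x in read_cifar(train_files, self.cifar_classnum)]
    return np.mean(np.array(imgs, dtype='float32'), axis=0)
```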
|
closed
|
2019-02-07T11:42:54Z
|
2019-02-08T04:11:55Z
|
https://github.com/tensorpack/tensorpack/issues/1072
|
[] |
cloudseasail
| 2
|
microsoft/nlp-recipes
|
nlp
| 45
|
Get datasets/packages approved by CELA
|
- BERT PyTorch Repo
- Yahoo Answers
- IMDb Large Movie Review Dataset
|
closed
|
2019-05-07T14:46:25Z
|
2019-08-02T14:14:38Z
|
https://github.com/microsoft/nlp-recipes/issues/45
|
[
"engineering"
] |
nikhilrj
| 3
|
inventree/InvenTree
|
django
| 8,452
|
Email Notification When Stock Level Drops Below Minimum?
|
### Please verify that this bug has NOT been raised before.
- [x] I checked and didn't find a similar issue
### Describe the bug*
I'm not receiving email notifications when a part's stock level drops below the set minimum, even though email functionality is working, and notifications are enabled in the user settings.
Could this issue (https://github.com/inventree/InvenTree/issues/7866) be addressing the same question?
### Steps to Reproduce
1. Set a minimum stock level for a part.
1. Ensure email notifications are enabled in the user settings.
1. Allow the stock level of that part to drop below the minimum.
### Expected behaviour
An email notification should be sent when the stock level falls below the minimum threshold.
### Deployment Method
- [ ] Docker
- [ ] Package
- [x] Bare metal
- [ ] Other - added info in Steps to Reproduce
### Version Information
# Version Information:
InvenTree-Version: 0.16.8
Django Version: 4.2.15
Database: sqlite3
Debug-Mode: False
Deployed using Docker: False
Platform: Linux-6.8.12-2-pve-x86_64-with-glibc2.31
Installer: None
Active plugins: [{'name': 'InvenTreeBarcode', 'slug': 'inventreebarcode', 'version': '2.1.0'}, {'name': 'InvenTreeCoreNotificationsPlugin', 'slug': 'inventreecorenotificationsplugin', 'version': '1.0.0'}, {'name': 'InvenTreeCurrencyExchange', 'slug': 'inventreecurrencyexchange', 'version': '1.0.0'}, {'name': 'InvenTreeLabel', 'slug': 'inventreelabel', 'version': '1.1.0'}, {'name': 'InvenTreeLabelMachine', 'slug': 'inventreelabelmachine', 'version': '1.0.0'}, {'name': 'InvenTreeLabelSheet', 'slug': 'inventreelabelsheet', 'version': '1.0.0'}, {'name': 'DigiKeyPlugin', 'slug': 'digikeyplugin', 'version': '1.0.0'}, {'name': 'LCSCPlugin', 'slug': 'lcscplugin', 'version': '1.0.0'}, {'name': 'MouserPlugin', 'slug': 'mouserplugin', 'version': '1.0.0'}, {'name': 'TMEPlugin', 'slug': 'tmeplugin', 'version': '1.0.0'}, {'name': 'IPNGenerator', 'slug': 'ipngen', 'version': '0.1'}]
### Please verify if you can reproduce this bug on the demo site.
- [ ] I can reproduce this bug on the demo site.
### Relevant log output
_No response_
|
closed
|
2024-11-08T09:51:12Z
|
2024-11-25T11:02:24Z
|
https://github.com/inventree/InvenTree/issues/8452
|
[
"bug",
"question"
] |
skydiablo
| 16
|
microsoft/MMdnn
|
tensorflow
| 104
|
can we directly convert tensorflow pb file to IR???
|
can we directly convert tensorflow pb file to IR???
|
closed
|
2018-03-14T03:28:55Z
|
2018-07-05T05:10:21Z
|
https://github.com/microsoft/MMdnn/issues/104
|
[
"enhancement"
] |
dinglong1020
| 3
|
KaiyangZhou/deep-person-reid
|
computer-vision
| 267
|
PyTorch is not using the GPU specified by gpu_devices
|
When I set `gpu_devices = 0`, it runs on all GPUs in the server. I also tried putting `CUDA_VISIBLE_DEVICES=0` before `python scripts/main.py`, and the same thing occurred.
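A quick diagnostic sketch (my suggestion; it relies on the fact that the mask must be set before CUDA is initialized):

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must be set before torch touches CUDA

import torch
print(torch.cuda.device_count())  # expect 1 if the mask is taking effect
```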
|
closed
|
2019-12-02T05:27:57Z
|
2019-12-03T10:42:32Z
|
https://github.com/KaiyangZhou/deep-person-reid/issues/267
|
[] |
justopit
| 8
|
ryfeus/lambda-packs
|
numpy
| 35
|
Upgrade Scipy to v1.2.0
|
Thanks for sharing this, I've been using it for over a year.
Would it be possible to upgrade the sklearn/scipy/numpy bundle to use the latest Scipy version v1.2.0 ?
|
closed
|
2019-02-07T12:35:12Z
|
2019-02-08T11:00:39Z
|
https://github.com/ryfeus/lambda-packs/issues/35
|
[] |
tomaso909
| 2
|
SciTools/cartopy
|
matplotlib
| 2,063
|
Getting error Proj version 0.0.0 is installed, but cartopy requires at least version 8.0.0 (while trying to install cartopy)
|
### Description
Hello, I was trying to install cartopy so that I can import and use it in a Jupyter notebook, and I am using a Windows computer. However, when I try to install it using pip, the following error occurs. Can anyone please help me find the best solution to this error? Thanks for your help.
#1981
C:\Users\barok> pip install cartopy
Collecting cartopy
Using cached Cartopy-0.20.3.tar.gz (10.8 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\barok\miniconda3\python.exe' 'C:\Users\barok\miniconda3\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' get_requires_for_build_wheel 'C:\Users\barok\AppData\Local\Temp\tmpr6c28j1_'
cwd: C:\Users\barok\AppData\Local\Temp\pip-install-abtwr2_1\cartopy_61ba9a4cccce421bbb67eada722140b7
Complete output (3 lines):
<string>:117: UserWarning: Unable to determine GEOS version. Ensure you have 3.7.2 or later installed, or installation may fail.
<string>:166: UserWarning: Unable to determine Proj version. Ensure you have 8.0.0 or later installed, or installation may fail.
Proj version 0.0.0 is installed, but cartopy requires at least version 8.0.0.
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/98/a9/0e4000eabadfcff6373c0fec790863b543b919cbfec18aed60d71ba67d5d/Cartopy-0.20.3.tar.gz#sha256=0d60fa2e2fbd77c4d1f6b1f9d3b588966147f07c1b179d2d34570ac1e1b49006 (from https://pypi.org/simple/cartopy/) (requires-python:>=3.7). Command errored out with exit status 1: 'C:\Users\barok\miniconda3\python.exe' 'C:\Users\barok\miniconda3\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' get_requires_for_build_wheel 'C:\Users\barok\AppData\Local\Temp\tmpr6c28j1_' Check the logs for full command output.
Using cached Cartopy-0.20.2.tar.gz (10.8 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\barok\miniconda3\python.exe' 'C:\Users\barok\miniconda3\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' get_requires_for_build_wheel 'C:\Users\barok\AppData\Local\Temp\tmp76fblfvu'
cwd: C:\Users\barok\AppData\Local\Temp\pip-install-abtwr2_1\cartopy_72929b0ca95c4775aa3c63cacbbd652e
Complete output (3 lines):
<string>:117: UserWarning: Unable to determine GEOS version. Ensure you have 3.7.2 or later installed, or installation may fail.
<string>:166: UserWarning: Unable to determine Proj version. Ensure you have 8.0.0 or later installed, or installation may fail.
Proj version 0.0.0 is installed, but cartopy requires at least version 8.0.0.
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/0f/c0/58453b036e79046d211f083880d58dcce787e7e07647ac25dc46c6555099/Cartopy-0.20.0.tar.gz#sha256=eae58aff26806e63cf115b2bce9477cedc4aa9f578c5e477b2c25cfa404f2b7a (from https://pypi.org/simple/cartopy/) (requires-python:>=3.7). Command errored out with exit status 1: 'C:\Users\barok\miniconda3\python.exe' 'C:\Users\barok\miniconda3\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' get_requires_for_build_wheel 'C:\Users\barok\AppData\Local\Temp\tmpcc19xy_c' Check the logs for full command output.
Using cached Cartopy-0.19.0.post1.tar.gz (12.1 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\barok\miniconda3\python.exe' 'C:\Users\barok\miniconda3\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' get_requires_for_build_wheel 'C:\Users\barok\AppData\Local\Temp\tmp606auhag'
cwd: C:\Users\barok\AppData\Local\Temp\pip-install-abtwr2_1\cartopy_ccec1e993dd44c0d80035dacf2c46fd4
Complete output (3 lines):
<string>:117: UserWarning: Unable to determine GEOS version. Ensure you have 3.3.3 or later installed, or installation may fail.
<string>:166: UserWarning: Unable to determine Proj version. Ensure you have 4.9.0 or later installed, or installation may fail.
Proj version 0.0.0 is installed, but cartopy requires at least version 4.9.0.
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/ed/ca/524ce33692df3faeaa56852fb6a33b0b410be94cc288417565b96fef3f64/Cartopy-0.19.0.post1.tar.gz#sha256=4b8b4773a98ed7009fe17d9b6ec87ac3ac62b7d14634d7768c190eadc647d576 (from https://pypi.org/simple/cartopy/) (requires-python:>=3.5). Command errored out with exit status 1: 'C:\Users\barok\miniconda3\python.exe' 'C:\Users\barok\miniconda3\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' get_requires_for_build_wheel 'C:\Users\barok\AppData\Local\Temp\tmp606auhag' Check the logs for full command output.
Using cached Cartopy-0.18.0.tar.gz (14.4 MB)
ERROR: Command errored out with exit status 1:
command: 'C:\Users\barok\miniconda3\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\barok\\AppData\\Local\\Temp\\pip-install-abtwr2_1\\cartopy_f1e5d9d6f39f4a9caa97e7cc6d1c0c2e\\setup.py'"'"'; __file__='"'"'C:\\Users\\barok\\AppData\\Local\\Temp\\pip-install-abtwr2_1\\cartopy_f1e5d9d6f39f4a9caa97e7cc6d1c0c2e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\barok\AppData\Local\Temp\pip-pip-egg-info-hwuvi3le'
cwd: C:\Users\barok\AppData\Local\Temp\pip-install-abtwr2_1\cartopy_f1e5d9d6f39f4a9caa97e7cc6d1c0c2e\
Complete output (5 lines):
C:\Users\barok\AppData\Local\Temp\pip-install-abtwr2_1\cartopy_f1e5d9d6f39f4a9caa97e7cc6d1c0c2e\setup.py:104: UserWarning: **Unable to determine GEOS version**. Ensure you have 3.3.3 or later installed, or installation may fail.
warnings.warn(
C:\Users\barok\AppData\Local\Temp\pip-install-abtwr2_1\cartopy_f1e5d9d6f39f4a9caa97e7cc6d1c0c2e\setup.py:157: UserWarning: Unable to determine Proj version. Ensure you have 4.9.0 or later installed, or installation may fail.
warnings.warn(
**Proj version 0.0.0 is installed**, but cartopy requires at least version 4.9.0.
PS C:\Users\barok> pip show proj
Name: proj
Version: 0.2.0
PS C:\Users\barok> pip show geos
Name: geos
Version: 0.2.3
|
closed
|
2022-07-31T14:12:58Z
|
2022-09-11T02:18:37Z
|
https://github.com/SciTools/cartopy/issues/2063
|
[] |
Barokirving1
| 4
|
matplotlib/matplotlib
|
matplotlib
| 29,385
|
[MNT]: new public method to help title positioning
|
### Summary
1. The change that broke #29381 would have been fine if the [logic that calculates `top` within `Axes._update_title_position`](https://github.com/matplotlib/matplotlib/blob/c11175d142403ff9af6e55ccb1feabccb990a7f6/lib/matplotlib/axes/_base.py#L3068-L3090) only needed to work for ordinary `Axes` instances.
2. Cartopy has artists separate from the `XAxis` and `YAxis` which must be considered for title placement. Currently, it does that by [overriding `_update_title_position`](https://github.com/SciTools/cartopy/blob/113be8ee587a6a57e20a5cc46bb27247b8f31fea/lib/cartopy/mpl/geoaxes.py#L511). Overriding a private method is not ideal, neither is repeating some of the code that is already in the parent method.
### Proposed fix
Factor out a public method that returns `top`. Making it public means it can safely be overridden in subclasses such as `PolarAxes` and Cartopy's `GeoAxes`. Knowing it can/should be overridden in subclasses means that in the parent `Axes` we can use the most efficient approach that works for `Axes`.
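For illustration, here is a minimal self-contained sketch of the proposed shape, using hypothetical names (`get_title_top`, `decoration_tops`) rather than matplotlib's actual API: the `top` computation becomes a public hook that subclasses extend instead of re-implementing private placement logic.
```python
# Hypothetical sketch of the proposal; names are illustrative only.
class Axes:
    def __init__(self, decoration_tops):
        # e.g. top edges of tick/label bounding boxes in figure coords
        self._decoration_tops = decoration_tops

    def get_title_top(self):
        """Public hook: subclasses may override to account for extra artists."""
        return max(self._decoration_tops, default=0.0)

    def update_title_position(self):
        # The placement logic only consults the public hook.
        return self.get_title_top() + 0.01  # titles sit just above `top`


class GeoAxes(Axes):
    def __init__(self, decoration_tops, gridline_label_tops):
        super().__init__(decoration_tops)
        self._gridline_label_tops = gridline_label_tops

    def get_title_top(self):
        # Extend the parent's computation instead of copying it.
        return max(super().get_title_top(),
                   max(self._gridline_label_tops, default=0.0))


ax = GeoAxes([0.90, 0.92], gridline_label_tops=[0.95])
print(ax.update_title_position())  # ~0.96
```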
|
open
|
2024-12-29T15:06:57Z
|
2025-01-06T20:39:08Z
|
https://github.com/matplotlib/matplotlib/issues/29385
|
[
"New feature",
"topic: ticks axis labels",
"Maintenance"
] |
rcomer
| 2
|
aeon-toolkit/aeon
|
scikit-learn
| 2,190
|
[SAX_Fast] is STALE
|
@patrickzib,
SAX_Fast has had no activity for 146 days.
This branch will be automatically deleted in 29 days.
|
closed
|
2024-10-14T01:27:56Z
|
2024-11-18T01:28:18Z
|
https://github.com/aeon-toolkit/aeon/issues/2190
|
[
"stale branch"
] |
aeon-actions-bot[bot]
| 5
|
ultralytics/yolov5
|
pytorch
| 13,142
|
I observe that the validation phase is much slower than the training phase on large validation sets and multi-GPU machines
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello, dear author. I observed that validation runs on only one GPU, and very slowly, regardless of how many GPUs are available. Here's a question from a novice perspective: why not make the validation phase multi-GPU parallel as well? Is it impossible, unnecessary, or simply something you haven't had time to do? Since I was recently looking for a way to reduce validation time, I was wondering if there is an existing solution that could save me some effort. If not, I'm going to try multi-GPU parallel validation, just like multi-GPU training. Would this work? Please forgive me if I have caused any offence.
### Additional
_No response_
|
closed
|
2024-06-27T07:18:11Z
|
2024-10-20T19:48:59Z
|
https://github.com/ultralytics/yolov5/issues/13142
|
[
"question",
"Stale"
] |
ASharpSword
| 6
|
matterport/Mask_RCNN
|
tensorflow
| 3,022
|
Unresolved reference hdf5_format
|
open
|
2024-03-01T08:06:51Z
|
2024-03-01T08:06:51Z
|
https://github.com/matterport/Mask_RCNN/issues/3022
|
[] |
whh1204
| 0
|
|
faif/python-patterns
|
python
| 239
|
[question]should the borg design pattern override __deepcopy__() ?
|
- Borg, as I see it, is the Python equivalent of a singleton
- but it does not handle deepcopy very well
- shouldn't there be a \_\_deepcopy\_\_ method? (see the sketch below)
- correct me if I am wrong
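For concreteness, a minimal sketch (my own, not the repo's code) of what such an override could look like, assuming a deep copy of a Borg should still share the monostate:
```python
import copy

class Borg:
    _shared_state = {}

    def __init__(self):
        self.__dict__ = self._shared_state

    def __deepcopy__(self, memo):
        # A "deep copy" of a Borg keeps sharing the monostate;
        # otherwise copy.deepcopy would detach the copied instance.
        return type(self)()

a = Borg()
a.x = 1
b = copy.deepcopy(a)
print(b.x)  # 1
b.x = 2
print(a.x)  # 2 -- state is still shared
```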
|
closed
|
2018-08-20T05:37:52Z
|
2019-02-08T13:42:29Z
|
https://github.com/faif/python-patterns/issues/239
|
[
"question"
] |
sak96
| 3
|
mckinsey/vizro
|
plotly
| 713
|
[Docs] Remove redundant provision of `id` in docs examples
|
We still have some examples where an `id` is provided to a component even though it is not required.
1. Look through the code examples in our docs e.g. `vizro-core/docs` and `vizro-ai/docs`
2. Remove the `id` from `vm.Graph`, `vm.Table`, `vm.AgGrid` or `vm.Card` if **it is not required**
#### When is it not required?
The `id` is normally not required if that component is not the target of any kind of action e.g. filter_interaction, export, filters or parameters. A good rule of thumb is, if the `id` appears only once in the entire app configuration, it's probably not required.
**Example of a redundant `id` provision** (and the first example where you can remove it from the docs):
In the first example the `id="scatter_chart"` is not required, because the Graph is not being targeted by any action. Also the `id` only appears once in the entire app configuration. In the second example it is required though, because it is now the target of the Filter.
```
from vizro import Vizro
import vizro.plotly.express as px
import vizro.models as vm
iris = px.data.iris()
page = vm.Page(
title="My first page",
components=[
vm.Graph(id="scatter_chart", figure=px.scatter(iris, x="sepal_length", y="petal_width", color="species")),
],
)
dashboard = vm.Dashboard(pages=[page])
Vizro().build(dashboard).run()
```
**Example where the `id` is required:**
```
from vizro import Vizro
import vizro.plotly.express as px
import vizro.models as vm
iris = px.data.iris()
page = vm.Page(
title="My first page",
components=[
vm.Graph(id="scatter_chart", figure=px.scatter(iris, x="sepal_length", y="petal_width", color="species")),
vm.Graph(id="scatter_chart2", figure=px.scatter(iris, x="petal_length", y="sepal_width", color="species")),
],
controls=[
vm.Filter(column="petal_length",targets=["scatter_chart"],selector=vm.RangeSlider(step=1)),
],
)
dashboard = vm.Dashboard(pages=[page])
Vizro().build(dashboard).run()
```
|
closed
|
2024-09-17T13:10:10Z
|
2024-11-25T14:37:34Z
|
https://github.com/mckinsey/vizro/issues/713
|
[
"Docs :spiral_notepad:",
"Good first issue :baby_chick:",
"hacktoberfest"
] |
huong-li-nguyen
| 3
|
apache/airflow
|
automation
| 47,702
|
Invalid keys in executor_config should raise an error
|
### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
10.0.1
### Apache Airflow version
2.10.5
### Operating System
Debian GNU/Linux 12 (bookworm)
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
The `executor_config` dictionary used to configure worker pods with the Kubernetes executor does not raise an error if any keys besides "pod_override" or "pod_template_file" are used. Even with invalid keys, the DAG is imported and will run as expected. The invalid `executor_config` will simply be disregarded.
### What you think should happen instead
If any keys besides "pod_override" or "pod_template_file" are used in `executor_config`, an error (maybe a DAG import error) should be raised.
This should give the user feedback that their `executor_config` keys are invalid and need to be removed or changed.
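For illustration only, a hedged sketch of what such validation could look like (a hypothetical helper, not the provider's actual code):
```python
ALLOWED_KEYS = {"pod_override", "pod_template_file"}

def validate_executor_config(executor_config: dict) -> None:
    """Hypothetical check that could run at DAG parse time."""
    unknown = set(executor_config) - ALLOWED_KEYS
    if unknown:
        raise ValueError(
            f"Invalid executor_config keys {sorted(unknown)}; "
            f"allowed keys are {sorted(ALLOWED_KEYS)}"
        )
```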
### How to reproduce
1. Create an Airflow instance that uses the Kubernetes Executor.
2. Create a DAG with the following code including an invalid key in `executor_config`.
```
import datetime
from airflow.decorators import dag
from airflow.decorators import task
executor_config = {
"key": "value"
}
@dag(start_date=datetime.datetime(2024, 10, 1), schedule="@daily", catchup=False)
def dag_1():
@task(executor_config=executor_config)
def task_1():
print("task 1")
task_1()
dag_1()
```
3. Observe that the DAG will be imported and run on the Airflow instance without any issues.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
open
|
2025-03-12T22:16:51Z
|
2025-03-13T16:31:53Z
|
https://github.com/apache/airflow/issues/47702
|
[
"kind:bug",
"area:providers",
"provider:cncf-kubernetes"
] |
karenbraganz
| 2
|
PokeAPI/pokeapi
|
api
| 644
|
official_artwork is currently official-artwork
|
Hi - I was working with the API yesterday and noticed the API began to fail on my end of the application.
I noticed, after going back to pokeapi.co, that the key for a Pokémon's official artwork sprite has changed from `official_artwork` to `official-artwork`, which was causing the error. Not sure if this was an intentional change or not, but just flagging it :)
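In the meantime, a hedged sketch of defensive client-side access that tolerates both spellings (the dict layout follows the public API response; the helper itself is hypothetical):
```python
from typing import Optional

def official_artwork_url(pokemon: dict) -> Optional[str]:
    """Return the official-artwork sprite URL, tolerating both key spellings."""
    other = pokemon.get("sprites", {}).get("other", {})
    art = other.get("official-artwork") or other.get("official_artwork") or {}
    return art.get("front_default")
```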
|
closed
|
2021-08-25T11:09:15Z
|
2021-08-25T14:53:59Z
|
https://github.com/PokeAPI/pokeapi/issues/644
|
[] |
OliverHeward
| 5
|
tflearn/tflearn
|
tensorflow
| 1,115
|
Binary prediction with LSTM
|
I'm trying to implement an LSTM model to make binary (or multiclass) classifications from raw log data (MOOC course log data -> user-level dropout/grade prediction). I have read lots of publications and tutorials that seem to cover what I'm looking for, but I couldn't find any example of how to use it.
Do you have a link or something about this topic? (RNN, ConvLSTM2D, LSTM, GRU on Keras or TF)
|
open
|
2019-01-30T16:09:31Z
|
2019-01-30T16:09:31Z
|
https://github.com/tflearn/tflearn/issues/1115
|
[] |
korosig
| 0
|
neuml/txtai
|
nlp
| 563
|
Dates fail in example
|
Hi,
I just tried the examples on the following page: https://neuml.github.io/txtai/embeddings/query/ and found that it does something wrong with dates:
```
desktop:~/txtai$ python3
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from txtai.embeddings import Embeddings
>>> embeddings = Embeddings(path="sentence-transformers/nli-mpnet-base-v2", content=True)
>>> embeddings.index([{"text": "text to index", "flag": True, "entry": "2022-01-01"}])
>>> embeddings.search("SELECT text, flag, entry FROM txtai WHERE similar('query') AND flag = 1 AND entry >= '2022-01-01'")
[{'text': 'text to index', 'flag': 1, 'entry': '2023-09-23 15:21:29.714090'}]
```
As you can see the act of inserting a date inserts the current date regardless of what's supplied (check the date of "entry" during insert and then on return from querying).
The expected result for the last "search" command should be
```
[{'text': 'text to index', 'flag': 1, 'entry': '2022-01-01 00:00:00.000000'}]
```
shouldn't it ?
|
closed
|
2023-09-23T14:25:44Z
|
2023-09-25T12:23:23Z
|
https://github.com/neuml/txtai/issues/563
|
[
"bug"
] |
rickknowles-cognitant
| 1
|
huggingface/text-generation-inference
|
nlp
| 2,572
|
OutOfMemory error running Meta-Llama-3.1-405B-Instruct-fp8 on 8xH100
|
### System Info
TGI version: 2.2.0 (but I tested 2.3.0 too)
Machine: 8x H100 (640 GPU RAM)
```
2024-09-25T14:29:44.260160Z INFO text_generation_launcher: Runtime environment:
Target: x86_64-unknown-linux-gnu
Cargo version: 1.79.0
Commit sha: db7e043ded45e14ed24188d5a963911c96049618
Docker label: sha-db7e043
nvidia-smi:
Wed Sep 25 14:29:43 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.154.05 Driver Version: 535.154.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA H100 80GB HBM3 On | 00000000:0F:00.0 Off | 0 |
| N/A 30C P0 114W / 700W | 0MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA H100 80GB HBM3 On | 00000000:2D:00.0 Off | 0 |
| N/A 35C P0 120W / 700W | 0MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA H100 80GB HBM3 On | 00000000:44:00.0 Off | 0 |
| N/A 31C P0 115W / 700W | 0MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 3 NVIDIA H100 80GB HBM3 On | 00000000:5B:00.0 Off | 0 |
| N/A 36C P0 115W / 700W | 0MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 4 NVIDIA H100 80GB HBM3 On | 00000000:89:00.0 Off | 0 |
| N/A 31C P0 114W / 700W | 0MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 5 NVIDIA H100 80GB HBM3 On | 00000000:A8:00.0 Off | 0 |
| N/A 35C P0 118W / 700W | 0MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 6 NVIDIA H100 80GB HBM3 On | 00000000:C0:00.0 Off | 0 |
| N/A 36C P0 116W / 700W | 0MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 7 NVIDIA H100 80GB HBM3 On | 00000000:D8:00.0 Off | 0 |
| N/A 32C P0 116W / 700W | 0MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
xpu-smi:
N/A
```
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
1. My deployment yaml:
```
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: tgi-llama
name: tgi-llama
namespace: llama-31
spec:
selector:
matchLabels:
app: tgi-llama
template:
metadata:
labels:
app: tgi-llama
spec:
containers:
- name: tgi-llama
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
args: ["--model-id", "meta-llama/Meta-Llama-3.1-405B-Instruct-fp8", "--sharded", "true", "--num-shard ", "8", "--env"]
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 100
memory: 1000G
nvidia.com/gpu: 8
ports:
- containerPort: 80
volumeMounts:
- mountPath: /data
name: tgi-llama-disk
- mountPath: /dev/shm
name: dshm
env:
- name: HUGGING_FACE_HUB_TOKEN
value: ""
- name: MAX_TOTAL_TOKENS
value: "13107"
- name: MAX_INPUT_LENGTH
value: "500"
- name: MAX_BATCH_PREFILL_TOKENS
value: "550"
- name: HUGGINGFACE_HUB_CACHE
value: "/data"
restartPolicy: Always
volumes:
- name: tgi-llama-disk
persistentVolumeClaim:
claimName: tgi-llama-disk
- name: dshm
emptyDir:
medium: Memory
tolerations:
- key: "nvidia.com/gpu"
operator: "Exists"
effect: NoSchedule
- key: "model"
operator: "Equal"
effect: NoSchedule
value: "llama31"
```
2. Logs:
```
2024-09-25T14:29:44.260191Z INFO text_generation_launcher: Args {
model_id: "meta-llama/Meta-Llama-3.1-405B-Instruct-fp8",
revision: None,
validation_workers: 2,
sharded: None,
num_shard: Some(
8,
),
quantize: None,
speculate: None,
dtype: None,
trust_remote_code: false,
max_concurrent_requests: 128,
max_best_of: 2,
max_stop_sequences: 4,
max_top_n_tokens: 5,
max_input_tokens: None,
max_input_length: Some(
500,
),
max_total_tokens: Some(
13107,
),
waiting_served_ratio: 0.3,
max_batch_prefill_tokens: Some(
550,
),
max_batch_total_tokens: None,
max_waiting_tokens: 20,
max_batch_size: None,
cuda_graphs: None,
hostname: "tgi-llama-6dfd4d944f-vmdkw",
port: 80,
shard_uds_path: "/tmp/text-generation-server",
master_addr: "localhost",
master_port: 29500,
huggingface_hub_cache: Some(
"/data",
),
weights_cache_override: None,
disable_custom_kernels: false,
cuda_memory_fraction: 1.0,
rope_scaling: None,
rope_factor: None,
json_output: false,
otlp_endpoint: None,
otlp_service_name: "text-generation-inference.router",
cors_allow_origin: [],
watermark_gamma: None,
watermark_delta: None,
ngrok: false,
ngrok_authtoken: None,
ngrok_edge: None,
tokenizer_config_path: None,
disable_grammar_support: false,
env: true,
max_client_batch_size: 4,
lora_adapters: None,
disable_usage_stats: false,
disable_crash_reports: false,
}
2024-09-25T14:29:44.260260Z INFO hf_hub: Token file not found "/root/.cache/huggingface/token"
2024-09-25T14:29:44.441323Z INFO text_generation_launcher: Using default cuda graphs [1, 2, 4, 8, 16, 32]
2024-09-25T14:29:44.441331Z INFO text_generation_launcher: Sharding model on 8 processes
2024-09-25T14:29:44.441452Z INFO download: text_generation_launcher: Starting check and download process for meta-llama/Meta-Llama-3.1-405B-Instruct-fp8
2024-09-25T15:00:51.799015Z INFO download: text_generation_launcher: Successfully downloaded weights for meta-llama/Meta-Llama-3.1-405B-Instruct-fp8
2024-09-25T15:00:51.799235Z INFO shard-manager: text_generation_launcher: Starting shard rank=0
2024-09-25T15:00:51.799251Z INFO shard-manager: text_generation_launcher: Starting shard rank=1
2024-09-25T15:00:51.799601Z INFO shard-manager: text_generation_launcher: Starting shard rank=2
2024-09-25T15:00:51.800066Z INFO shard-manager: text_generation_launcher: Starting shard rank=3
2024-09-25T15:00:51.800097Z INFO shard-manager: text_generation_launcher: Starting shard rank=4
2024-09-25T15:00:51.801546Z INFO shard-manager: text_generation_launcher: Starting shard rank=5
2024-09-25T15:00:51.801585Z INFO shard-manager: text_generation_launcher: Starting shard rank=6
2024-09-25T15:00:51.802622Z INFO shard-manager: text_generation_launcher: Starting shard rank=7
2024-09-25T15:00:56.515337Z INFO text_generation_launcher: Auto selecting quantization method fp8
2024-09-25T15:01:01.806057Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-09-25T15:01:01.807285Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-09-25T15:01:01.807322Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-09-25T15:01:01.807360Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=4
2024-09-25T15:01:01.808804Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-09-25T15:01:01.809297Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=6
2024-09-25T15:01:01.809605Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=7
2024-09-25T15:01:01.814302Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=5
2024-09-25T15:01:05.514208Z INFO text_generation_launcher: Using FBGEMM fp8 optimized kernels
2024-09-25T15:04:30.363596Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-2
2024-09-25T15:04:30.371516Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-3
2024-09-25T15:04:30.372803Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-4
2024-09-25T15:04:30.372919Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-5
2024-09-25T15:04:30.372927Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-7
2024-09-25T15:04:30.373540Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-0
2024-09-25T15:04:30.373927Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-1
2024-09-25T15:04:30.420621Z INFO shard-manager: text_generation_launcher: Shard ready in 218.618910525s rank=4
2024-09-25T15:04:30.426690Z INFO shard-manager: text_generation_launcher: Shard ready in 218.622944116s rank=7
2024-09-25T15:04:30.427452Z INFO shard-manager: text_generation_launcher: Shard ready in 218.62400201s rank=5
2024-09-25T15:04:30.444388Z INFO shard-manager: text_generation_launcher: Shard ready in 218.644204722s rank=0
2024-09-25T15:04:30.460515Z INFO shard-manager: text_generation_launcher: Shard ready in 218.658884257s rank=2
2024-09-25T15:04:30.460530Z INFO shard-manager: text_generation_launcher: Shard ready in 218.658891373s rank=1
2024-09-25T15:04:30.460532Z INFO shard-manager: text_generation_launcher: Shard ready in 218.657400525s rank=3
2024-09-25T15:04:30.556841Z INFO text_generation_launcher: Starting Webserver
2024-09-25T15:04:30.664794Z INFO text_generation_router: router/src/main.rs:228: Using the Hugging Face API
2024-09-25T15:04:30.664836Z INFO hf_hub: /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/hf-hub-0.3.2/src/lib.rs:55: Token file not found "/root/.cache/huggingface/token"
2024-09-25T15:04:31.378511Z INFO text_generation_router: router/src/main.rs:577: Serving revision 2147c7e74f1bf338ad11843e450ee174df547589 of model meta-llama/Meta-Llama-3.1-405B-Instruct-FP8
2024-09-25T15:04:31.597861Z INFO text_generation_router: router/src/main.rs:357: Using config Some(Llama)
2024-09-25T15:04:31.597869Z WARN text_generation_router: router/src/main.rs:384: Invalid hostname, defaulting to 0.0.0.0
2024-09-25T15:04:31.851898Z INFO text_generation_router::server: router/src/server.rs:1572: Warming up model
2024-09-25T15:04:33.037820Z INFO text_generation_launcher: Cuda Graphs are enabled for sizes [32, 16, 8, 4, 2, 1]
2024-09-25T15:04:34.456876Z ERROR text_generation_launcher: Method Warmup encountered an error.
Traceback (most recent call last):
2024-09-25T15:04:34.519240Z ERROR text_generation_launcher: Method Warmup encountered an error.
Traceback (most recent call last):
File "/opt/conda/bin/text-generation-server", line 8, in <module>
sys.exit(app())
File "/opt/conda/lib/python3.10/site-packages/typer/main.py", line 311, in __call__
return get_command(self)(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/typer/core.py", line 778, in main
return _main(
File "/opt/conda/lib/python3.10/site-packages/typer/core.py", line 216, in _main
rv = self.invoke(ctx)
File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/typer/main.py", line 683, in wrapper
return callback(**use_params) # type: ignore
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/cli.py", line 118, in serve
server.serve(
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/server.py", line 297, in serve
asyncio.run(
File "/opt/conda/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 636, in run_until_complete
self.run_forever()
File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
self._run_once()
File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
handle._run()
File "/opt/conda/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/opt/conda/lib/python3.10/site-packages/grpc_interceptor/server.py", line 165, in invoke_intercept_method
return await self.intercept(
> File "/opt/conda/lib/python3.10/site-packages/text_generation_server/interceptor.py", line 21, in intercept
return await response
File "/opt/conda/lib/python3.10/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 120, in _unary_interceptor
raise error
File "/opt/conda/lib/python3.10/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 111, in _unary_interceptor
return await behavior(request_or_iterator, context)
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/server.py", line 125, in Warmup
max_supported_total_tokens = self.model.warmup(batch)
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/flash_causal_lm.py", line 1196, in warmup
self.cuda_graph_warmup(bs, max_s, max_bt)
File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/flash_causal_lm.py", line 1065, in cuda_graph_warmup
with torch.cuda.graph(graph, pool=MEM_POOL):
File "/opt/conda/lib/python3.10/site-packages/torch/cuda/graphs.py", line 184, in __exit__
self.cuda_graph.capture_end()
File "/opt/conda/lib/python3.10/site-packages/torch/cuda/graphs.py", line 82, in capture_end
super().capture_end()
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2024-09-25T15:04:34.598137Z ERROR warmup{max_input_length=500 max_prefill_tokens=550 max_total_tokens=13107 max_batch_size=None}:warmup: text_generation_client: router/client/src/lib.rs:46: Server error: CANCELLED
2024-09-25T15:04:34.617895Z ERROR warmup{max_input_length=500 max_prefill_tokens=550 max_total_tokens=13107 max_batch_size=None}:warmup: text_generation_client: router/client/src/lib.rs:46: Server error: CANCELLED
2024-09-25T15:04:34.650181Z ERROR warmup{max_input_length=500 max_prefill_tokens=550 max_total_tokens=13107 max_batch_size=None}:warmup: text_generation_client: router/client/src/lib.rs:46: Server error: CANCELLED
2024-09-25T15:04:34.677632Z ERROR warmup{max_input_length=500 max_prefill_tokens=550 max_total_tokens=13107 max_batch_size=None}:warmup: text_generation_client: router/client/src/lib.rs:46: Server error: CANCELLED
2024-09-25T15:04:34.680492Z ERROR warmup{max_input_length=500 max_prefill_tokens=550 max_total_tokens=13107 max_batch_size=None}:warmup: text_generation_client: router/client/src/lib.rs:46: Server error: CANCELLED
2024-09-25T15:04:34.701973Z ERROR warmup{max_input_length=500 max_prefill_tokens=550 max_total_tokens=13107 max_batch_size=None}:warmup: text_generation_client: router/client/src/lib.rs:46: Server error: CANCELLED
2024-09-25T15:04:34.707007Z ERROR warmup{max_input_length=500 max_prefill_tokens=550 max_total_tokens=13107 max_batch_size=None}:warmup: text_generation_client: router/client/src/lib.rs:46: Server error: CANCELLED
2024-09-25T15:04:34.713119Z ERROR warmup{max_input_length=500 max_prefill_tokens=550 max_total_tokens=13107 max_batch_size=None}:warmup: text_generation_client: router/client/src/lib.rs:46: Server error: CANCELLED
Error: WebServer(Warmup(Generation("CANCELLED")))
2024-09-25T15:04:34.954646Z ERROR text_generation_launcher: Webserver Crashed
2024-09-25T15:04:34.954664Z INFO text_generation_launcher: Shutting down shards
2024-09-25T15:04:34.963134Z INFO shard-manager: text_generation_launcher: Terminating shard rank=2
2024-09-25T15:04:34.963148Z INFO shard-manager: text_generation_launcher: Terminating shard rank=3
2024-09-25T15:04:34.963165Z INFO shard-manager: text_generation_launcher: Terminating shard rank=1
2024-09-25T15:04:34.964271Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=2
2024-09-25T15:04:34.964340Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=3
2024-09-25T15:04:34.964421Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=1
2024-09-25T15:04:35.023355Z INFO shard-manager: text_generation_launcher: Terminating shard rank=4
2024-09-25T15:04:35.024172Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=4
2024-09-25T15:04:35.029462Z INFO shard-manager: text_generation_launcher: Terminating shard rank=7
2024-09-25T15:04:35.030347Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=7
2024-09-25T15:04:35.030945Z INFO shard-manager: text_generation_launcher: Terminating shard rank=6
2024-09-25T15:04:35.032281Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=6
2024-09-25T15:04:35.032512Z INFO shard-manager: text_generation_launcher: Terminating shard rank=5
2024-09-25T15:04:35.034027Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=5
2024-09-25T15:04:35.047083Z INFO shard-manager: text_generation_launcher: Terminating shard rank=0
2024-09-25T15:04:35.047903Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=0
2024-09-25T15:04:35.364752Z INFO shard-manager: text_generation_launcher: shard terminated rank=3
2024-09-25T15:04:35.465564Z INFO shard-manager: text_generation_launcher: shard terminated rank=1
2024-09-25T15:04:35.764901Z INFO shard-manager: text_generation_launcher: shard terminated rank=2
2024-09-25T15:04:35.931027Z INFO shard-manager: text_generation_launcher: shard terminated rank=7
2024-09-25T15:04:36.024913Z INFO shard-manager: text_generation_launcher: shard terminated rank=4
2024-09-25T15:04:36.248767Z INFO shard-manager: text_generation_launcher: shard terminated rank=0
2024-09-25T15:04:36.333451Z INFO shard-manager: text_generation_launcher: shard terminated rank=6
2024-09-25T15:04:36.635381Z INFO shard-manager: text_generation_launcher: shard terminated rank=5
Error: WebserverFailed
```
### Expected behavior
Meta-Llama-3.1-405B-Instruct-fp8 starts with at least 10k tokens.
I'm aware that there are reported problems running llama3.1 with the full 128k context, but I can't even go to 500 due to the OOM error.
Meta-Llama-3.1-405B-Instruct-fp8 requires about 400 GB of GPU RAM to load the model, and my machine has 640 GB in total, so I thought that should be sufficient.
|
open
|
2024-09-26T08:29:18Z
|
2024-12-10T00:36:38Z
|
https://github.com/huggingface/text-generation-inference/issues/2572
|
[] |
ad01bl
| 3
|
noirbizarre/flask-restplus
|
flask
| 748
|
Duplicate "doc" and "root" endpoints on reregistering blueprints?
|
As the title says... I think this might be a bug.
### **Code**
##### api_v1.py
```python
from flask import Blueprint
from flask_restplus import Api
blueprint = Blueprint('api_v1', __name__, url_prefix='/api/v1')
api = Api(blueprint, title='My Title',
version='1.0',
description='A description',
doc='/docs')
```
##### app.py
```python
import copy
from flask import Flask
app = Flask(__name__)
# ...
from api_v1 import blueprint
blueprint_copy = copy.copy(blueprint)
blueprint_copy.name = "api" # Renamed for collision
app.register_blueprint(blueprint, url_prefix="/api")
app.register_blueprint(blueprint_copy, url_prefix="/api_copy")
```
### **Repro Steps** (if applicable)
1. Setup a flask app with flask restplus.
2. Register the blueprint twice (same as code).
3. `$ flask routes`
### **Expected Behavior**
Endpoints should be only defined once.
### **Actual Behavior**
Endpoints **api.doc** and **api.root** are shown to be defined twice.
### **Error Messages/Stack Trace**
The following is what I get when I type `flask routes` in my specific app terminal.
```cmd
Endpoint Methods Rule
------------------- --------- --------------------------
api.doc GET /api/docs
api.root GET /api/
api.specs GET /api/swagger.json
api_v1.doc GET /api_copy/docs
api_v1.doc GET /api_copy/docs
api_v1.root GET /api_copy/
api_v1.root GET /api_copy/
api_v1.specs GET /api_copy/swagger.json
index GET /
restplus_doc.static GET /swaggerui/<path:filename>
static GET /static/<path:filename>
```
### **Environment**
- Python version **3.7.3**
- Flask version **1.1.1**
- Flask-RESTPlus version **0.13.0**
- Requirements.txt:
```
alembic==1.3.0
aniso8601==8.0.0
astroid==2.3.2
attrs==19.3.0
Click==7.0
colorama==0.4.1
Flask==1.1.1
Flask-Login==0.4.1
Flask-Migrate==2.5.2
flask-restplus==0.13.0
Flask-SQLAlchemy==2.4.1
Flask-WTF==0.14.2
importlib-metadata==0.23
isort==4.3.21
itsdangerous==1.1.0
Jinja2==2.10.3
jsonschema==3.1.1
lazy-object-proxy==1.4.3
Mako==1.1.0
MarkupSafe==1.1.1
mccabe==0.6.1
more-itertools==7.2.0
pylint==2.4.3
pylint-flask==0.6
pylint-flask-sqlalchemy==0.1.0
pylint-plugin-utils==0.6
pyrsistent==0.15.5
python-dateutil==2.8.0
python-dotenv==0.10.3
python-editor==1.0.4
pytz==2019.3
six==1.12.0
SQLAlchemy==1.3.10
typed-ast==1.4.0
Werkzeug==0.16.0
wrapt==1.11.2
WTForms==2.2.1
zipp==0.6.0
```
|
closed
|
2019-11-04T12:15:27Z
|
2019-11-15T11:38:45Z
|
https://github.com/noirbizarre/flask-restplus/issues/748
|
[
"bug"
] |
knno
| 0
|
brightmart/text_classification
|
nlp
| 49
|
There is no file named "test-zhihu-forpredict-title-desc-v6.txt" in the Hierarchical Attention Network
|
There is no file named "test-zhihu-forpredict-title-desc-v6.txt" when I run p1_HierarchicalAttention_predict.py in the Hierarchical Attention Network. I have also tried to use test-zhihu6-title-desc.txt instead, but that raises an error. Can you give me some advice? @brightmart
|
closed
|
2018-04-29T13:34:35Z
|
2018-05-02T08:25:16Z
|
https://github.com/brightmart/text_classification/issues/49
|
[] |
Fannjh
| 1
|
JaidedAI/EasyOCR
|
deep-learning
| 512
|
Suggestion: setup.py
|
It would be nice if the `install` command could take an optional argument to install torch with CUDA, with CPU as the default. This would save time when installing packages, especially for those with a bad internet connection :).
It's just a suggestion 😄 Thank you either way!!
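For what it's worth, a hedged sketch of how setuptools extras could express an opt-in GPU install (hypothetical, not EasyOCR's actual setup.py; note that CUDA torch wheels usually also need a custom index URL, which extras alone cannot carry):
```python
from setuptools import setup

setup(
    name="easyocr",
    install_requires=[
        "torch",  # CPU-capable default keeps the download small
    ],
    extras_require={
        # Hypothetical opt-in: `pip install easyocr[gpu]`. CUDA-specific
        # wheels normally also require --extra-index-url, which an extra
        # by itself cannot express.
        "gpu": ["torch"],
    },
)
```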
|
closed
|
2021-08-09T12:39:12Z
|
2022-03-02T09:25:32Z
|
https://github.com/JaidedAI/EasyOCR/issues/512
|
[] |
NinaM31
| 0
|
plotly/dash
|
dash
| 2,984
|
[QUESTION] Does Dash have an official logo
|
Does Dash have a purely official logo image that doesn't include the word `plotly`?
|
closed
|
2024-09-05T09:17:24Z
|
2024-09-26T16:19:08Z
|
https://github.com/plotly/dash/issues/2984
|
[
"feature",
"P2"
] |
CNFeffery
| 5
|
vi3k6i5/flashtext
|
nlp
| 6
|
remove keyword feature
|
```
from flashtext.keyword import KeywordProcessor
keyword_processor = KeywordProcessor()
keyword_processor.add_keyword('NCR Area')
```
Can we also have a feature to remove the keyword.
```
keyword_processor.remove_keyword('NCR Area')
```
So we can add it back in a different form.
```
keyword_processor.add_keyword('NCR Region')
```
Use case: We have a distributed processing system where there is a central ontology layer. Ontology layer has 10K keywords. When the ontology cache is updated for one value, we don't want to restart all apps/workers, or rebuild the `KeywordProcessor()` all over again. Just want to take out one key and add back another key.
|
closed
|
2017-09-14T17:15:47Z
|
2017-09-25T17:30:32Z
|
https://github.com/vi3k6i5/flashtext/issues/6
|
[
"enhancement"
] |
vi3k6i5
| 1
|
Lightning-AI/pytorch-lightning
|
data-science
| 19,980
|
autocast to float16/bfloat16 fails on transformer encoder
|
### Bug description
`bf16` precision in Trainer yields an error
### What version are you seeing the problem on?
v2.3
### How to reproduce the bug
My model includes this encoder:
```python
self.encoder = nn.Sequential(
nn.Flatten(start_dim=2),
nn.Dropout(0.15),
nn.Linear(math.prod(pose_dims), hidden_dim, bias=False),
PositionalEncoding(d_model=hidden_dim),
nn.TransformerEncoder(
nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=nhead,
dim_feedforward=dim_feedforward,
batch_first=True),
num_layers=num_layers
)
)
```
Then, run the Trainer with `precision="bf16-mixed"`
(Note! "bf16-true" works, but yields a very bad learning curve)
### Error messages and logs
```
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/Users/amitmoryossef/dev/sign-language-processing/vq/sign_vq/train.py", line 147, in <module>
main()
File "/Users/amitmoryossef/dev/sign-language-processing/vq/sign_vq/train.py", line 143, in main
trainer.fit(model, train_dataloaders=train_dataset, val_dataloaders=validation_dataset)
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 543, in fit
call._call_and_handle_interrupt(
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 579, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _run
results = self._run_stage()
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 1028, in _run_stage
self._run_sanity_check()
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 1057, in _run_sanity_check
val_loop.run()
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/pytorch_lightning/loops/utilities.py", line 182, in _decorator
return loop_run(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 135, in run
self._evaluation_step(batch, batch_idx, dataloader_idx, dataloader_iter)
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 396, in _evaluation_step
output = call._call_strategy_hook(trainer, hook_name, *step_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/pytorch_lightning/trainer/call.py", line 311, in _call_strategy_hook
output = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/pytorch_lightning/strategies/strategy.py", line 411, in validation_step
return self.lightning_module.validation_step(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/amitmoryossef/dev/sign-language-processing/vq/sign_vq/model.py", line 234, in validation_step
loss, prediction = self.step(batch)
^^^^^^^^^^^^^^^^
File "/Users/amitmoryossef/dev/sign-language-processing/vq/sign_vq/model.py", line 215, in step
x_hat, indices = self(x)
^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/amitmoryossef/dev/sign-language-processing/vq/sign_vq/model.py", line 170, in forward
return self.model(batch)
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/amitmoryossef/dev/sign-language-processing/vq/sign_vq/model.py", line 129, in forward
x = self.encoder(x)
^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/container.py", line 217, in forward
input = module(input)
^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 391, in forward
output = mod(output, src_mask=mask, is_causal=is_causal, src_key_padding_mask=src_key_padding_mask_for_layers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/vq/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 685, in forward
return torch._transformer_encoder_layer_fwd(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 must have the same dtype, but got BFloat16 and Float
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU: None
- available: False
- version: None
* Lightning:
- lightning-utilities: 0.11.2
- pytorch-lightning: 2.3.0
- torch: 2.2.2
- torchmetrics: 1.4.0.post0
- vector-quantize-pytorch: 1.14.24
* Packages:
- aiohttp: 3.9.5
- aiosignal: 1.3.1
- astroid: 3.2.2
- attrs: 23.2.0
- certifi: 2024.6.2
- charset-normalizer: 3.3.2
- click: 8.1.7
- datasets: 2.20.0
- decorator: 4.4.2
- dill: 0.3.8
- docker-pycreds: 0.4.0
- einops: 0.8.0
- einx: 0.3.0
- filelock: 3.15.1
- frozendict: 2.4.4
- frozenlist: 1.4.1
- fsspec: 2024.5.0
- gitdb: 4.0.11
- gitpython: 3.1.43
- huggingface-hub: 0.23.3
- idna: 3.7
- imageio: 2.34.1
- imageio-ffmpeg: 0.5.1
- iniconfig: 2.0.0
- isort: 5.13.2
- jinja2: 3.1.4
- lightning-utilities: 0.11.2
- markupsafe: 2.1.5
- mccabe: 0.7.0
- moviepy: 1.0.3
- mpmath: 1.3.0
- multidict: 6.0.5
- multiprocess: 0.70.16
- networkx: 3.3
- numpy: 1.26.4
- opencv-python: 4.10.0.82
- packaging: 24.1
- pandas: 2.2.2
- pillow: 10.3.0
- pip: 24.0
- platformdirs: 4.2.2
- pluggy: 1.5.0
- pose-format: 0.4.1
- proglog: 0.1.10
- protobuf: 5.27.1
- psutil: 5.9.8
- pyarrow: 16.1.0
- pyarrow-hotfix: 0.6
- pylint: 3.2.3
- pytest: 8.2.2
- python-dateutil: 2.9.0.post0
- pytorch-lightning: 2.3.0
- pytz: 2024.1
- pyyaml: 6.0.1
- requests: 2.32.3
- scipy: 1.13.1
- sentry-sdk: 2.5.1
- setproctitle: 1.3.3
- setuptools: 69.5.1
- sign-vq: 0.0.1
- six: 1.16.0
- smmap: 5.0.1
- sympy: 1.12.1
- tomlkit: 0.12.5
- torch: 2.2.2
- torchmetrics: 1.4.0.post0
- tqdm: 4.66.4
- typing-extensions: 4.12.2
- tzdata: 2024.1
- urllib3: 2.2.1
- vector-quantize-pytorch: 1.14.24
- wandb: 0.17.1
- wheel: 0.43.0
- xxhash: 3.4.1
- yarl: 1.9.4
* System:
- OS: Darwin
- architecture:
- 64bit
-
- processor: i386
- python: 3.11.9
- release: 23.5.0
- version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
</details>
### More info
I tried to follow https://github.com/Lightning-AI/pytorch-lightning/issues/15006
and fed the batch directly as `bf16`, but that does not change the error; see the repro sketch below.
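For reference, a minimal repro sketch outside Lightning that I believe mirrors the failure mode in the trace (assumption: the fused fast path `torch._transformer_encoder_layer_fwd` receives bf16 activations under autocast while the weights stay fp32):
```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=16, nhead=2, batch_first=True)
enc = nn.TransformerEncoder(layer, num_layers=1).eval()  # eval() enables the fast path
x = torch.randn(2, 4, 16)

with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    enc(x)  # may raise: mat1 and mat2 must have the same dtype
```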
|
closed
|
2024-06-16T07:28:11Z
|
2024-08-04T09:46:35Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19980
|
[
"bug",
"question"
] |
AmitMY
| 4
|
minimaxir/textgenrnn
|
tensorflow
| 80
|
UnicodeEncodeError: 'ascii' codec can't encode character
|
I just got around to testing textgenrnn over ROCm, and upon generating some text, I encountered this error:
```
File "/root/textgenrnn/textgenrnn/textgenrnn.py", line 89, in generate
print("{}\n".format(gen_text))
UnicodeEncodeError: 'ascii' codec can't encode character '\u201c' in position 51: ordinal not in range(128)
```
I could quickly fix it based on [this answer](https://stackoverflow.com/a/25402141/635587), although I have a feeling that this is not a proper solution.
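For anyone hitting the same thing: the linked answer works by forcing a UTF-8-capable stdout; an equivalent in-process sketch on Python 3.7+ (a workaround only, not a fix for the library's `print` call):
```python
import sys

# Reconfigure stdout to UTF-8 so print() can emit characters like '\u201c'
# even when the locale defaults to ASCII.
sys.stdout.reconfigure(encoding="utf-8")
```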
|
open
|
2018-11-17T21:20:32Z
|
2019-01-31T21:48:17Z
|
https://github.com/minimaxir/textgenrnn/issues/80
|
[] |
torokati44
| 1
|
seleniumbase/SeleniumBase
|
pytest
| 3,346
|
understanding the cdp mode click
|
```python
from seleniumbase import SB

with SB(uc=True, maximize=True) as sb:
    url = "https://google.com"
    sb.activate_cdp_mode(url)
    sb.sleep(2)
    sb.cdp.find_element('[name="q"]')
    # Type the query one character at a time
    for i in "facebook.com":
        sb.sleep(0.5)
        sb.cdp.press_keys('[name="q"]', i)
    # Wait for the search input field to be visible
    sb.cdp.click('input[class="gNO89b"]')
    sb.cdp.sleep(4)
    try:
        sb.cdp.click('div[class="sjVJQd"]')
    except Exception:
        print("box not found")
    sb.cdp.sleep(2)
    sb.cdp.click('h3[class="LC20lb MBeuO DKV0Md"]')
    sb.cdp.sleep(60)
```
Hello, I am using SB CDP mode for automation to reproduce web traffic. Back in 2022 the traffic appeared in Google Search Console as real traffic, but today when I use it, no clicks appear in Google Search Console at all.
I would like to know what has changed in the click logic of SeleniumBase between 2022 and today.
Finally, thanks for developing this project.
|
closed
|
2024-12-17T12:39:40Z
|
2024-12-17T15:51:28Z
|
https://github.com/seleniumbase/SeleniumBase/issues/3346
|
[
"question",
"UC Mode / CDP Mode"
] |
pythondeveloperz
| 1
|
rthalley/dnspython
|
asyncio
| 648
|
processing_order breaks for HTTPS without "priming" using a suitable extraneous query
|
Applying *processing_order()* to the RRset returned in response to an HTTPS query causes an AttributeError.
However, if an extraneous (eg. MX) query is placed, this is no longer the case, even without issuing a fresh HTTPS query.
Interactive session shown below illustrates this.
```
vagrant@vagrant:~$ pip3 show dnspython
Name: dnspython
Version: 2.1.0
Summary: DNS toolkit
Home-page: http://www.dnspython.org
Author: Bob Halley
Author-email: halley@dnspython.org
License: ISC
Location: /usr/local/lib/python3.6/dist-packages
Requires:
vagrant@vagrant:~$ python3
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns.resolver
>>> ans = dns.resolver.resolve('crypto.cloudflare.com', 'https')
>>> ans.rrset.processing_order()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/dns/rdataset.py", line 324, in processing_order
return self[0]._processing_order(iter(self))
File "/usr/local/lib/python3.6/dist-packages/dns/rdtypes/svcbbase.py", line 543, in _processing_order
return dns.rdtypes.util.priority_processing_order(iterable)
AttributeError: module 'dns.rdtypes' has no attribute 'util'
>>> alt = dns.resolver.resolve('github.com', 'mx')
>>> ans.rrset.processing_order()
[<DNS IN HTTPS rdata: 1 . alpn="h2" ipv4hint="162.159.135.79,162.159.136.79" echconfig="AEf+CQBDABNjbG91ZGZsYXJlLWVzbmkuY29tACCwkoUYgWT6cX2qc5RjgnyS9SgXaz51fKkzOqJr1g6tPQAgAAQAAQABAAAAAA==" ipv6hint="2606:4700:7::a29f:874f,2606:4700:7::a29f:884f">]
>>>
>>>
vagrant@vagrant:~$
```
|
closed
|
2021-03-09T12:40:16Z
|
2021-03-09T22:00:04Z
|
https://github.com/rthalley/dnspython/issues/648
|
[
"Bug",
"Fixed"
] |
niallor
| 2
|
coqui-ai/TTS
|
pytorch
| 4,045
|
A portable version is great
|
Hello admin and everyone
For many people like me who are not comfortable with code, a portable version on Windows would be great.
Could anyone make a portable version for the community?
Thank you very much
|
closed
|
2024-11-03T23:56:17Z
|
2024-12-28T11:58:23Z
|
https://github.com/coqui-ai/TTS/issues/4045
|
[
"wontfix",
"feature request"
] |
kerlynla
| 2
|
SYSTRAN/faster-whisper
|
deep-learning
| 688
|
distil + word_timestamps=True => CRASH
|
Hello,
When using [this finetuned version of distil whisper](https://huggingface.co/bofenghuang/whisper-large-v3-french-distil-dec16) and trying to use `word_timestamps=True`, it crashes when starting the transcription; there is no issue when `word_timestamps=False`.
It's a CRASH, not a Python error: it straight-up exits the Python instance, no crash message, nothing, just byebye amigo hasta la vista
|
closed
|
2024-02-15T12:43:00Z
|
2024-11-19T23:18:56Z
|
https://github.com/SYSTRAN/faster-whisper/issues/688
|
[] |
ExtReMLapin
| 4
|
biolab/orange3
|
numpy
| 6,980
|
Possible help improvement
|
There are plenty of awesome teaching videos on YouTube made by Biolab.
The widget help has already links to Wikipedia and probably other sites.
Should we include links as part of the Example sections to relevant Youtube videos from the Biolab channel? If this would be useful I can start adding them.
|
closed
|
2025-01-06T17:58:54Z
|
2025-01-10T16:10:56Z
|
https://github.com/biolab/orange3/issues/6980
|
[] |
borondics
| 1
|
gevent/gevent
|
asyncio
| 2,036
|
Socket timeouts when using gevent
|
* gevent version: `gevent==24.2.1` from PyPI (can reproduce on previous versions)
* greenlet version: `greenlet==3.0.3` from PyPI (can reproduce on previous versions)
* Python version: `Python 3.8.10` from `apt`
* Operating System: `uname -a` returns `aarch64 aarch64 aarch64 GNU/Linux`
### Description:
Hey there! We're running a Python + Django + gunicorn + WSGI stack, pretty high traffic (thousands of QPS). Recently, we noticed this stack trace starting to occur. First off, reading https://www.gevent.org/api/gevent.greenlet.html, we think
> [gevent.Greenlet](https://www.gevent.org/api/gevent.greenlet.html#gevent.Greenlet) is a light-weight cooperatively-scheduled execution unit. It is a more powerful version of [greenlet.greenlet](https://www.gevent.org/api/gevent.greenlet.html#gevent.greenlet.greenlet). For general information, see [Lightweight pseudothreads](https://www.gevent.org/intro.html#greenlet-basics).
this means we should be filing with you and not greenlet.
Next, we're reading up on gevent internals to get a sense of what's going on here. I traced this code from [here](https://github.com/gevent/gevent/blob/master/src/gevent/_hub_primitives.py#L295) to [here](https://github.com/gevent/gevent/blob/master/src/gevent/_greenlet_primitives.py#L65) but I'm a bit confused here. Because it looks like we do
```python
from greenlet import greenlet
locals()['_greenlet_switch'] = greenlet.switch
```
so we are using the greenlet library here, which makes me wonder if I **should file this with greenlet instead?**
Anyways, from the trace itself, as I understand it, we're making a network request, so gevent tells this Greenlet thread to start waiting while we I/O, and switch to another greenlet thread. It looks like we finish waiting and try to get a thread to switch into [here](https://github.com/gevent/gevent/blob/master/src/gevent/_hub_primitives.py#L55). But it looks like when we try to perform the switch to the new thread [here](https://github.com/gevent/gevent/blob/master/src/gevent/_gevent_c_greenlet_primitives.pxd#L35) but we fail due to socket timeout. And unfortunately the stacktrace ends here. Does this directly link with this greenlet code [here](https://github.com/python-greenlet/greenlet/blob/937f150e07823ee03344aeeb5111c0bb371a831d/src/greenlet/greenlet.cpp#L889)? **Am I understanding the trace correctly?** Another potential understanding would be that the greenlet thread we try to wait on threw a socket.timeout. After reading the code I think this is not what happened, but could use a confirmation.
Is this just a simple case of switching back to an existing waiting greenlet just to find that the network request has timed out and the socket closed? Or is there something deeper going on here? We're still investigating this internally but would love to get some expertise / starting guidance / previous experience.
While we are seeing this consistently, it may be hard to reproduce deterministically, since as far as we can tell, there's no consistent input factors that cause this besides potentially scale.
```python-traceback
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/urllib3/connectionpool.py", line 426, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/local/lib/python3.8/dist-packages/urllib3/connectionpool.py", line 421, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse
response.begin()
File "/usr/lib/python3.8/http/client.py", line 316, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.8/http/client.py", line 277, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.8/socket.py", line 669, in readinto
return self._sock.recv_into(b)
File "/usr/local/lib/python3.8/dist-packages/gevent/_socketcommon.py", line 696, in recv_into
self._wait(self._read_event)
File "src/gevent/_hub_primitives.py", line 317, in gevent._gevent_c_hub_primitives.wait_on_socket
File "src/gevent/_hub_primitives.py", line 322, in gevent._gevent_c_hub_primitives.wait_on_socket
File "src/gevent/_hub_primitives.py", line 313, in gevent._gevent_c_hub_primitives._primitive_wait
File "src/gevent/_hub_primitives.py", line 314, in gevent._gevent_c_hub_primitives._primitive_wait
File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_hub_primitives.py", line 55, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_waiter.py", line 154, in gevent._gevent_c_waiter.Waiter.get
File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 65, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_gevent_c_greenlet_primitives.pxd", line 35, in gevent._gevent_c_greenlet_primitives._greenlet_switch
socket.timeout: timed out
```
### What I've run:
in `guincorn.py`
```python
worker_class = "gevent"
...
# Enable Gevent coroutine optimization with our third-party libraries
def on_starting(server):
import grpc.experimental.gevent as grpc_gevent # 1.45.0rc1
import psycogreen.gevent # 1.0
psycogreen.gevent.patch_psycopg()
grpc_gevent.init_gevent()
```
note: some names are replaced
in <business_logic>.py
```python
@traced(name="run_a_bunch_of_functions_in_parallel", inject_span=True)
def run_a_bunch_of_functions_in_parallel(
a_bunch_of_functions: list[Callable[[], None]], span: Span
) -> None:
span.set_tag("num_functions.count", len(a_bunch_of_functions))
workers = [gevent.spawn(_thread, fn, span) for fn in a_bunch_of_functions]
gevent.joinall(workers, timeout=settings.SOME_GLOBAL_TIMEOUT) # 5 seconds
    _rethrow_exceptions_if_any(workers)

def _rethrow_exceptions_if_any(workers: list[Greenlet]) -> None:
for worker in workers:
if isinstance(worker.value, WorkerException):
einfo = worker.value.args
try:
raise einfo[0](einfo[1]).with_traceback(einfo[2])
except AttributeError:
raise einfo[0](einfo[1])
```
|
closed
|
2024-06-08T19:31:57Z
|
2024-10-10T09:43:27Z
|
https://github.com/gevent/gevent/issues/2036
|
[
"Type: Question"
] |
wayne-li2
| 5
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 10,632
|
hybrid_method and Postgres array filtering
|
### Describe the bug
Can't use `hybrid_method` or `hybrid_property` to filter result set.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
https://docs.sqlalchemy.org/en/20/orm/extensions/hybrid.html
### SQLAlchemy Version in Use
2.0.20
### DBAPI (i.e. the database driver)
asyncpg
### Database Vendor and Major Version
PostgreSQL 14
### Python Version
3.11
### Operating system
Linux
### To Reproduce
```python
class ClientContactsNotification(BaseModel):
__tablename__ = "client_contacts_notification"
__table_args__ = {"schema": "delivery"}
id: Mapped[int] = mapped_column(Integer, primary_key=True)
id_post_operation: Mapped[int] = mapped_column(
ForeignKey("delivery.post_operation.id")
)
phones: Mapped[List[str]] = mapped_column(ARRAY(String))
emails: Mapped[List[str]] = mapped_column(ARRAY(String))
identifiers: Mapped[List[str]] = mapped_column(ARRAY(String))
@hybrid_property
def phones_str(self):
return ",".join(self.phones)
@hybrid_method
def phone_like(self, phone: str) -> bool:
return any([phone in item for item in self.phones])
@pytest.mark.parametrize(argnames="q", argvalues=["11111111"])
async def test_client_db_contacts(plain_db_session, q):
stmt = select(ClientContactsNotification)
stmt = stmt.filter(
ClientContactsNotification.phone_like(q) == True,
)
result = await plain_db_session.execute(stmt)
    data = result.scalars().all()
assert data
```
### Error
```
/tests/test_orders.py::test_client_db_contacts[9222298749] Failed: [undefined]NotImplementedError: Operator 'contains' is not supported on this expression
plain_db_session = <sqlalchemy.ext.asyncio.session.AsyncSession object at 0x7f057edf1450>
q = '9222298749'
@pytest.mark.parametrize(argnames="q", argvalues=["11111111"])
async def test_client_db_contacts(plain_db_session, q):
stmt = select(ClientContactsNotification)
stmt = stmt.filter(
> ClientContactsNotification.phone_like(q),
)
tests/test_orders.py:85:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/database/models/delivery.py:235: in phone_like
return any([phone in item for item in self.phones])
src/database/models/delivery.py:235: in <listcomp>
return any([phone in item for item in self.phones])
.venv/lib64/python3.11/site-packages/sqlalchemy/sql/operators.py:657: in __contains__
return self.operate(contains, other)
.venv/lib64/python3.11/site-packages/sqlalchemy/sql/elements.py:1616: in operate
return op(self.comparator, *other, **kwargs) # type: ignore[no-any-return] # noqa: E501
.venv/lib64/python3.11/site-packages/sqlalchemy/sql/operators.py:657: in __contains__
return self.operate(contains, other)
.venv/lib64/python3.11/site-packages/sqlalchemy/sql/type_api.py:194: in operate
return op_fn(self.expr, op, *other, **addtl_kw)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
expr = <sqlalchemy.sql.elements.BinaryExpression object at 0x7f057f0c4c10>
op = <built-in function contains>, arg = ('11111111',), kw = {}
def _unsupported_impl(
expr: ColumnElement[Any], op: OperatorType, *arg: Any, **kw: Any
) -> NoReturn:
> raise NotImplementedError(
"Operator '%s' is not supported on " "this expression" % op.__name__
)
E NotImplementedError: Operator 'contains' is not supported on this expression
.venv/lib64/python3.11/site-packages/sqlalchemy/sql/default_comparator.py:250: NotImplementedError
```
### Additional context
In the current version there is no built-in way to compare each element of an array to a value, or to use the `like` operator on array elements.
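One possible workaround, sketched without verifying against this schema: give the hybrid a separate SQL-side expression, e.g. flattening the array with `array_to_string` and applying `LIKE`:
```python
from sqlalchemy import func
from sqlalchemy.ext.hybrid import hybrid_method


class ClientContactsNotification(BaseModel):
    # ... same columns as in the reproduction above ...

    @hybrid_method
    def phone_like(self, phone: str) -> bool:
        # Python-side evaluation, used on loaded instances
        return any(phone in item for item in self.phones)

    @phone_like.expression
    def phone_like(cls, phone: str):
        # SQL-side evaluation: flatten the array and apply LIKE
        return func.array_to_string(cls.phones, ",").like(f"%{phone}%")
```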
|
closed
|
2023-11-14T16:09:24Z
|
2023-11-14T20:24:07Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/10632
|
[] |
rsaleev
| 0
|
babysor/MockingBird
|
pytorch
| 402
|
Trained 150k steps with aidatatang_200zh data, and the synthesized voice is all static noise
|
I trained for 150k steps with the aidatatang_200zh data and found the synthesized voice is all static noise; the audio is muffled and unclear.
|
open
|
2022-02-25T05:38:35Z
|
2022-04-29T14:08:59Z
|
https://github.com/babysor/MockingBird/issues/402
|
[] |
907811175
| 4
|
supabase/supabase-py
|
fastapi
| 801
|
When using the asynchronous client (acreate_client): error "detail": "'coroutine' object has no attribute 'auth'"
|
# Bug report
## Describe the bug
When you create the client with `acreate_client` and use async functions that await `supabase.auth.signin(credentials)`, you get the error above.
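A minimal sketch of the likely fix, assuming supabase-py v2: `acreate_client` is itself a coroutine, so it must be awaited before `.auth` is accessed (the URL, key, and credentials below are placeholders):
```python
import asyncio

from supabase import acreate_client


async def main():
    # awaiting here yields an AsyncClient; skipping the await leaves a bare
    # coroutine, which is what raises "'coroutine' object has no attribute 'auth'"
    supabase = await acreate_client("https://xyz.supabase.co", "anon-key")
    await supabase.auth.sign_in_with_password(
        {"email": "user@example.com", "password": "secret"}
    )


asyncio.run(main())
```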
|
closed
|
2024-05-16T14:35:41Z
|
2024-05-21T21:42:05Z
|
https://github.com/supabase/supabase-py/issues/801
|
[
"bug"
] |
Bradkibs
| 1
|
gradio-app/gradio
|
machine-learning
| 10,606
|
413 Payload Too Large Error When Using Chatbot Share Button in Multistep Agent UI
|
### Describe the bug
Using the share button in the open-deep-research chatbot component triggers a 413 (Payload Too Large) error from CloudFront.
This happens when trying to share agent response (usually with more than four agent steps) to Hugging Face Spaces Discussions using the share button. However, no error occurs if the agent response is small.
**Error Message**
```
413 ERROR
The request could not be satisfied.
Bad request. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error.
Generated by cloudfront (CloudFront)
Request ID: lFs_dgTYdKp1rZUj8S7bKI1lTA_4XBgccTL_KbtRDRX-D2WDIJtCcw==
```
**Suggested Solutions**
1. It would be helpful to have either built-in handling for large payloads or adding docs about size limitations for the share functionality.
2. Add a `gr.Warning` when content exceeds the shareable limits.
3. One possible solution could be to offer a configurable parameter that allows sharing only the last N messages. However, if the agent's response is too lengthy for certain chats, sharing with `N=1` might still result in a 413 Error.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
1. Create a Gradio chatbot with `show_share_button=True`
2. Accumulate larger chat history through conversation
3. Click the share button in the chatbot component
4. Observe the 413 error from CloudFront
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
- Gradio Version: `5.16.0`
- Error is encountered on Spaces
```
### Severity
I can work around it
|
closed
|
2025-02-17T13:59:32Z
|
2025-02-20T20:13:59Z
|
https://github.com/gradio-app/gradio/issues/10606
|
[
"bug",
"💬 Chatbot"
] |
yvrjsharma
| 0
|
agronholm/anyio
|
asyncio
| 503
|
Thread leaks in async tests marked with @pytest.mark.anyio
|
This is because TestRunner doesn't call `loop.shutdown_default_executor()` at the end of a test. It calls [`loop.close()`](https://github.com/agronholm/anyio/blob/master/src/anyio/_backends/_asyncio.py#L1730-L1733), which [doesn't join threads (`wait=False`)](https://github.com/python/cpython/blob/3.11/Lib/asyncio/base_events.py#L684).
A cleaner approach would call `loop.shutdown_default_executor()` right after [shutting down asyncgens](https://github.com/agronholm/anyio/blob/master/src/anyio/_backends/_asyncio.py#L1730). The approach is adopted in `IsolatedAsyncioTestCase` from the standard library: `asyncio.Runner.close` [called on tear down](https://github.com/python/cpython/blob/3.11/Lib/unittest/async_case.py#L124-L126) shuts down the loop's [default executor](https://github.com/python/cpython/blob/3.11/Lib/asyncio/runners.py#L73).
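For reference, a minimal sketch of the suggested teardown order on a plain asyncio loop (not anyio's actual code):
```python
# shut down async generators first, then join the default executor's threads,
# and only then close the loop
loop.run_until_complete(loop.shutdown_asyncgens())
loop.run_until_complete(loop.shutdown_default_executor())  # joins worker threads
loop.close()
```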
|
closed
|
2022-11-23T08:07:51Z
|
2022-11-26T15:29:48Z
|
https://github.com/agronholm/anyio/issues/503
|
[] |
marcinbarczynski
| 0
|
voila-dashboards/voila
|
jupyter
| 1,307
|
[Voila] WARNING | Unrecognized alias: 'ip', it will have no effect.
|
How can I bind the IP to 0.0.0.0?
`voila --ip=0.0.0.0` does not work:
```bash
(py311) ubuntu@VM-4-12-ubuntu:~/notebook$ voila --ip=0.0.0.0
[Voila] WARNING | Unrecognized alias: 'ip', it will have no effect.
[Voila] Using /tmp to store connection files
[Voila] Storing connection files in /tmp/voila_k36ohco_.
[Voila] Serving static files from /home/ubuntu/miniconda3/envs/py311/lib/python3.11/site-packages/voila/static.
[Voila] Voilà is running at:
http://localhost:8866/
```
voila version 0.4.0
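Assuming the `ip` trait exists on the `Voila` application class even though the short `--ip` alias is not registered (an assumption on my part), spelling out the full trait name may work:
```bash
voila --Voila.ip=0.0.0.0
```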
|
closed
|
2023-03-19T05:38:35Z
|
2023-03-19T11:48:13Z
|
https://github.com/voila-dashboards/voila/issues/1307
|
[
"bug"
] |
wukan1986
| 2
|
sgl-project/sglang
|
pytorch
| 3,874
|
[Bug] `pip install sglang` no longer installs all dependencies of the frontend
|
### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
The documentation suggests using `pip install sglang` to set up the dependencies for the frontend language, and that's correct only for versions before (not including) [v0.4.1.post7](https://github.com/sgl-project/sglang/releases/tag/v0.4.1.post7). The [PR that moves `Runtime` to the frontend](https://github.com/sgl-project/sglang/pull/2990) breaks this by introducing a new dependency, `aiohttp`, which is neither necessary for powering the frontend nor included in the dependencies.
This regression was originally discovered by @starwolves123 in an internal project.
### Reproduction
Just start from a new virtual environment and run `pip install sglang`.
`from sglang import RuntimeEndpoint` will raise `ModuleNotFoundError: No module named 'aiohttp'`.
### Environment
This script is not really runnable because it imports `torch` without checking for its existence. As far as I can tell, it's basically equivalent to having `sglang` and [these dependencies](https://github.com/sgl-project/sglang/blob/3dc9ff3ce8bb88dcbcf2655f616bd5439f224c11/python/pyproject.toml#L16) installed, which don't include `aiohttp`.
|
closed
|
2025-02-26T06:52:31Z
|
2025-02-26T08:25:47Z
|
https://github.com/sgl-project/sglang/issues/3874
|
[] |
stevapple
| 3
|
jina-ai/clip-as-service
|
pytorch
| 275
|
why the same sentence show different embedding?
|
Hi,
I'm trying to embed 200,000 sentences, and I used two ways to embed them, as follows.
One passes the whole list of strings as `data`:
```python
with open('a1') as fp:
    data = [v.strip('\n') for v in fp]
vectors = bc.encode(data)
```
The other encodes one line at a time:
```python
line = f3.readline()
line = line.strip('\n')
while line:
    vectors = bc.encode([line])
    line = f3.readline()
```
But the results show that the embeddings of the same sentence differ between the two approaches.
Can anyone tell me why?
Here is my code:
```python
import sys
import time
from bert_serving.client import BertClient

if __name__ == '__main__':
    bc = BertClient(port=int(sys.argv[1]), port_out=int(sys.argv[2]), show_server_config=True, output_fmt='list')
    with open('a1') as fp:
        data = [v.strip('\n') for v in fp]
    vectors = bc.encode(data)
    f3 = open("a1")
    f4 = open("output2.txt", 'a+')
    line = f3.readline()
    line = line.strip('\n')
    print([line])
    while line:
        vectors = bc.encode([line])
        f4.write(str(vectors))
        line = f3.readline()
    f3.close()
    f4.close()
```
|
closed
|
2019-03-15T04:40:28Z
|
2019-03-15T09:21:01Z
|
https://github.com/jina-ai/clip-as-service/issues/275
|
[] |
wingsyuan
| 6
|
davidsandberg/facenet
|
computer-vision
| 279
|
how to use my own trained model?
|
stackoverflow : https://stackoverflow.com/questions/44017147/how-to-use-my-own-trained-model-with-facenet-implemented-in-tensorflow
|
closed
|
2017-05-17T06:49:28Z
|
2017-07-15T17:12:04Z
|
https://github.com/davidsandberg/facenet/issues/279
|
[] |
pine-byte
| 2
|
mljar/mercury
|
data-visualization
| 58
|
Reading notebook without utf-8 encoding
|
Hi,
Thank you for your amazing work !
I have a problem when I try to convert my notebook with `mercury run`; I get the following message:
Error during notebook initialization. 'charmap' codec can't decode byte 0x9d in position 3522922: character maps to <undefined>
The notebook runs fine in Jupyter and I have no issues... I can't understand where this is coming from...
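The 'charmap' codec in the message suggests the notebook file is not UTF-8 encoded. A rough one-off fix, assuming the file was saved in a Windows codepage such as cp1252 (both the path and the source codec here are guesses):
```python
# re-encode the notebook to UTF-8 so mercury can read it
with open("notebook.ipynb", encoding="cp1252") as src:
    content = src.read()
with open("notebook.ipynb", "w", encoding="utf-8") as dst:
    dst.write(content)
```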
Please help !
Thank you in advance.
Best Regards
|
closed
|
2022-03-10T08:44:33Z
|
2022-03-15T21:13:56Z
|
https://github.com/mljar/mercury/issues/58
|
[
"bug"
] |
doubianimehdi
| 18
|
plotly/dash-core-components
|
dash
| 717
|
allow Location component to target window.parent.location
|
A common pattern is to embed a Dash app inside an iframe within a Flask app. Linking from inside the nested Dash app to redirect the parent window to a route in the Flask app can be done using an anchor tag with `target="_parent"`.
This can't be achieved however with a redirect from a callback that changes the `href` of a `Location` component, because the `Location` component is limited to targeting `window.location`. A potential extension could be to allow a user to specify targeting `window.parent.location`, potentially with a `target` prop that imitates the `target` attribute on anchor tags.
See this [motivating context in the Dash forum](https://community.plot.ly/t/getting-out-of-an-iframe/32508), where someone is trying to have clicking on different cells in a DataTable redirect the parent window to routes in the Flask app.
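To illustrate, the proposed prop might be used like this (purely hypothetical; `target` does not exist on `dcc.Location` today):
```python
import dash_core_components as dcc

# hypothetical: redirect the parent window instead of the iframe itself
dcc.Location(id="redirect", href="/flask-route", target="_parent")
```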
|
open
|
2019-12-16T07:41:50Z
|
2019-12-16T07:43:20Z
|
https://github.com/plotly/dash-core-components/issues/717
|
[] |
ned2
| 0
|
google-research/bert
|
nlp
| 1,082
|
how to used fine tuned model as initial checkpoint for another task?
|
Hi,
I fine-tuned a classification model with 19 classes, then added several new classes, and I want to use the old model as the initial checkpoint to fine-tune the new model. After I point the initial checkpoint in the training command to the previously fine-tuned model, I get this error:
```ValueError: Shape of variable loss/output_bias:0 ((23,)) doesn't match with shape of tensor loss/output_bias ([19]) from checkpoint reader.```
What is the correct way to save the fine-tuned model in order to accomplish this? Thank you!
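For what it's worth, one common approach is to restore everything except the old classification head so the new 23-class layer is freshly initialized. A rough sketch, assuming the TF1-style checkpoint loading that the BERT repo uses (the path is a placeholder; the `loss/output_` prefix is taken from the error message):
```python
import tensorflow as tf

# list all variables stored in the old 19-class checkpoint
reader = tf.train.load_checkpoint("old_model/model.ckpt")
var_shapes = reader.get_variable_to_shape_map()

# map every variable onto itself except the old output layer, so
# loss/output_weights and loss/output_bias get fresh initializers
assignment_map = {
    name: name for name in var_shapes if not name.startswith("loss/output_")
}
tf.train.init_from_checkpoint("old_model/model.ckpt", assignment_map)
```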
|
closed
|
2020-05-10T08:16:13Z
|
2020-05-11T21:33:55Z
|
https://github.com/google-research/bert/issues/1082
|
[] |
bohanbo
| 0
|
mwaskom/seaborn
|
matplotlib
| 2,982
|
Line + Band with variables that Band does not support is awkward
|
`Band` does not support `linestyle`, so this plot is wrong in a confusing way:
```python
(
so.Plot(fmri, "timepoint", "signal", color="region", linestyle="event")
.add(so.Line(), so.Agg())
.add(so.Band(), so.Est())
)
```
<img width="495" alt="image" src="https://user-images.githubusercontent.com/315810/187090315-f0d2a444-b92b-4519-984b-f75d818e2ea7.png">
One needs to do this:
```python
(
so.Plot(fmri, "timepoint", "signal", color="region", linestyle="event")
.add(so.Line(), so.Agg())
.add(so.Band(), so.Est(), group="event")
)
```
<img width="500" alt="image" src="https://user-images.githubusercontent.com/315810/187089845-5b15af88-1b12-46ce-b5dc-ff3532c0dc5a.png">
Perhaps the stat grouping should use any variables defined at the common level or in that layer, not just those the mark accepts?
This will have some implications as we address #2911
|
open
|
2022-08-28T18:58:16Z
|
2022-08-28T18:59:52Z
|
https://github.com/mwaskom/seaborn/issues/2982
|
[
"rough-edge",
"objects-plot"
] |
mwaskom
| 0
|
PrefectHQ/prefect
|
data-science
| 17,225
|
The Flow diagram cannot be displayed when Prefect is deployed locally
|
### Bug summary
When trying to deploy locally by referring to the quick start document (https://docs.prefect.io/v3/get-started/quickstart), I cannot see the Flow image

### Version info
```Text
Version: 3.2.6
API version: 0.8.4
Python version: 3.10.13
Git commit: 5ceb3ada
Built: Wed, Feb 19, 2025 9:24 PM
OS/Arch: linux/x86_64
Profile: local
Server type: server
Pydantic version: 2.9.2
```
### Additional context
_No response_
|
closed
|
2025-02-21T07:28:14Z
|
2025-03-21T01:55:30Z
|
https://github.com/PrefectHQ/prefect/issues/17225
|
[
"bug"
] |
Moonquakes
| 5
|
hyperspy/hyperspy
|
data-visualization
| 2,927
|
Inversion of indices in axes_manager.set_axis
|
**This was reported by @magnunor in https://github.com/hyperspy/hyperspy/pull/2830#issuecomment-1086916555:**
I tested `axes_manager.set_axis`, and there seems to be an "inversion" of the indices:
```python
import numpy as np
import hyperspy.api as hs
s0 = hs.signals.Signal1D(np.zeros((5, 10, 15)))
s0.axes_manager.navigation_axes[0].scale = 0.1
s0.axes_manager.navigation_axes[1].scale = 0.2
s1 = hs.signals.Signal1D(np.zeros((5, 10, 20)))
s1.axes_manager.set_axis(s0.axes_manager.navigation_axes[0], 0)
s1.axes_manager.set_axis(s0.axes_manager.navigation_axes[1], 1)
```
```python
print(s0.axes_manager)
<Axes manager, axes: (10, 5|15)>
Name | size | index | offset | scale | units
================ | ====== | ====== | ======= | ======= | ======
<undefined> | 10 | 7 | 0 | 0.1 | <undefined>
<undefined> | 5 | 3 | 0 | 0.2 | <undefined>
---------------- | ------ | ------ | ------- | ------- | ------
<undefined> | 15 | 0 | 0 | 1 | <undefined>
print(s1.axes_manager)
<Axes manager, axes: (5, 10|20)>
Name | size | index | offset | scale | units
================ | ====== | ====== | ======= | ======= | ======
<undefined> | 5 | 3 | 0 | 0.2 | <undefined>
<undefined> | 10 | 7 | 0 | 0.1 | <undefined>
---------------- | ------ | ------ | ------- | ------- | ------
<undefined> | 20 | 0 | 0 | 1 | <undefined>
```
---------------------
Ergo, to properly "copy" the navigation axes:
```python
s1.axes_manager.set_axis(s0.axes_manager.navigation_axes[0], 1)
s1.axes_manager.set_axis(s0.axes_manager.navigation_axes[1], 0)
```
|
open
|
2022-04-15T10:18:29Z
|
2022-04-15T10:19:00Z
|
https://github.com/hyperspy/hyperspy/issues/2927
|
[
"type: bug?"
] |
jlaehne
| 1
|
sloria/TextBlob
|
nlp
| 152
|
Translation issues
|
I made a very easy script to play around with the translation module:
```
from textblob import TextBlob
en_text = TextBlob('You shall find of the king a husband, madam; you, sir, a father: he that so generally is at all times good must of necessity hold his virtue to you; whose worthiness would stir it up where it wanted rather than lack it where there is such abundance.')
nl_text = en_text.translate(from_lang='en', to='nl')
print(nl_text)
```
But this results in a couple of errors of which I can hardly make any sense:
```
Traceback (most recent call last):
File "C:\Users\Gebruiker\Desktop\TEXTBLOW.py", line 4, in <module>
nl_text = en_text.translate(from_lang='en', to='nl')
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\site-packages\textblob\blob.py", line 505, in translate
from_lang=from_lang, to_lang=to))
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\site-packages\textblob\translate.py", line 52, in translate
response = self._request(self.url, host=host, type_=type_, data=data)
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\site-packages\textblob\translate.py", line 92, in _request
resp = request.urlopen(req)
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 162, in urlopen
return opener.open(url, data, timeout)
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 471, in open
response = meth(req, response)
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 581, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 503, in error
result = self._call_chain(*args)
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 443, in _call_chain
result = func(*args)
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 686, in http_error_302
return self.parent.open(new, timeout=req.timeout)
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 471, in open
response = meth(req, response)
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 581, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 509, in error
return self._call_chain(*args)
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 443, in _call_chain
result = func(*args)
File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 589, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 503: Service Unavailable
```
|
closed
|
2017-02-16T12:31:03Z
|
2017-02-16T12:34:07Z
|
https://github.com/sloria/TextBlob/issues/152
|
[] |
DutchDandy
| 0
|
littlecodersh/ItChat
|
api
| 290
|
When @-mentioned in a group, why do some mentions insert a normal space while others insert \u2005?
|
All on iPhone, all on WeChat 6.5: when the bot is @-mentioned in a group, why do some mentions automatically add a space after the name while others use \u2005?
For example:
@bot [space] content
@bot \u2005 content
|
closed
|
2017-03-19T14:04:30Z
|
2017-03-22T07:49:49Z
|
https://github.com/littlecodersh/ItChat/issues/290
|
[
"question"
] |
pengyuwei
| 1
|
jupyterlab/jupyter-ai
|
jupyter
| 437
|
Use JSON mode with /generate command for models that support it
|
### Problem
Some generative models, when asked to generate JSON, may generate output that is not valid JSON. This will cause the `/generate` command to fail.
### Proposed Solution
For models that support it (see below) enable JSON mode when a user runs the `/generate` command.
### Additional context
As of November 6, 2023, OpenAI's `gpt-4-vision-preview` and `gpt-3.5-turbo` models support [JSON mode](https://platform.openai.com/docs/guides/text-generation/json-mode). Clients must include the string `"JSON"` in their system message, and clients using these models must set `response_format` to `{ type: "json_object" }`, to enable this mode.
See #435 for another issue related to generative models that fail to output valid JSON.
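For reference, a minimal sketch of enabling JSON mode with the OpenAI Python client (the model name is illustrative, and per the docs the system message must mention JSON):
```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},  # enables JSON mode
    messages=[
        {"role": "system", "content": "Reply only with valid JSON."},
        {"role": "user", "content": "Outline a notebook about sorting."},
    ],
)
print(response.choices[0].message.content)
```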
|
open
|
2023-11-06T23:35:00Z
|
2023-11-06T23:35:53Z
|
https://github.com/jupyterlab/jupyter-ai/issues/437
|
[
"enhancement",
"scope:chat-ux",
"scope:generate"
] |
JasonWeill
| 0
|
gradio-app/gradio
|
machine-learning
| 10,201
|
Accordion - Expanding vertically to the right
|
- [x] I have searched to see if a similar issue already exists.
I would really like to have the ability to place an accordion vertically and expand to the right. I have scenarios where this would be a better UI solution, as doing so would automatically push the other components to the right of it forward.
I have no idea how to tweak this in CSS to make it work. If you have a simple CSS solution I would appreciate it until we have this feature. I am actually developing something that would really need this feature.
I made this drawing of what it would be like.

|
closed
|
2024-12-14T23:51:34Z
|
2024-12-16T16:46:38Z
|
https://github.com/gradio-app/gradio/issues/10201
|
[] |
elismasilva
| 1
|
Lightning-AI/pytorch-lightning
|
deep-learning
| 20,171
|
Inconsistent input io type between `to_onnx` and `torch.onnx.export`.
|
### Bug description
Currently the filetype supported in `torch.onnx.export` includes `io.BytesIO`, whereas in `lightning`, it only supports `str` and `PathLike` object. Before lightning `2.3.3` , passing a `BytesIO` wouldn't be a problem since `to_onnx` did not do anything with `file_path`, but since this version, it changed by passing `str(file_path)`, which will cause problems when passing an `BytesIO` instance.
### What version are you seeing the problem on?
v2.3
### How to reproduce the bug
```python
from io import BytesIO
model = LightningModel()
onnx_io = BytesIO()
model.to_onnx(onnx_io)
```
### Error messages and logs
```
OSError: [Errno 22] Invalid argument: '<_io.BytesIO object at 0x000002487558E3B0>'
```
### Environment
_No response_
### More info
_No response_
|
closed
|
2024-08-06T14:05:07Z
|
2024-08-07T15:07:40Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20171
|
[
"bug",
"ver: 2.3.x"
] |
GdoongMathew
| 0
|
onnx/onnx
|
pytorch
| 6,172
|
How to use onnx.utils.extract_model to extract more than 2GB child onnx model ?
|
```python
input_name = "sample"  # '/conv_in/Conv_output_0'
output_name = "/down_blocks.1/resnets.0/norm1/Reshape_output_0"
onnx.utils.extract_model(original_model_path, extracted_model_path, [input_name], [output_name])
```
```
  onnx.utils.extract_model(original_model_path, extracted_model_path, [input_name], [output_name])
  File "/home/tiger/.local/lib/python3.10/site-packages/onnx/utils.py", line 209, in extract_model
    e = Extractor(model)
  File "/home/tiger/.local/lib/python3.10/site-packages/onnx/utils.py", line 16, in __init__
    self.model = onnx.shape_inference.infer_shapes(model)
  File "/home/tiger/.local/lib/python3.10/site-packages/onnx/shape_inference.py", line 45, in infer_shapes
    model_str = model if isinstance(model, bytes) else model.SerializeToString()
ValueError: Message onnx.ModelProto exceeds maximum protobuf size of 2GB: 10275992708
```
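For reference, shape inference has a file-based API for models over 2GB that avoids serializing the full `ModelProto` in memory. This alone does not patch `extract_model`, which calls the in-memory `infer_shapes` internally, but it is the usual starting point (the output path is a placeholder):
```python
import onnx

# writes the shape-inferred model to disk instead of returning it in memory
onnx.shape_inference.infer_shapes_path(original_model_path, "inferred_model.onnx")
```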
|
closed
|
2024-06-12T07:42:11Z
|
2024-06-20T08:52:06Z
|
https://github.com/onnx/onnx/issues/6172
|
[
"question"
] |
Lenan22
| 2
|
ymcui/Chinese-LLaMA-Alpaca-2
|
nlp
| 546
|
Using the model after finetuning
|
### Required checks before submitting
- [X] Make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui); it is also recommended to look for solutions in the corresponding projects.
### Issue type
Model training and finetuning
### Base model
Chinese-LLaMA-2 (7B/13B)
### Operating system
None
### Detailed description of the problem
```
# Paste the code you ran here (inside this code block)
```
After finetuning, using the resulting model (output/checkpoint-400) fails once the original pre-finetune model is deleted. adapter_config.json contains base_model_name_or_path, so running the finetuned model still tries to read that path and raises OSError: Can't load the configuration of 'xxx'.
My question: I want to run this inside a container and keep the image slim, which is why I deleted the original model. Why does it still need to read the original model?
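For context, a LoRA adapter only stores weight deltas, so the base model referenced in adapter_config.json is genuinely required at load time. A possible workaround, sketched under the assumption that checkpoint-400 is a PEFT/LoRA adapter (which adapter_config.json suggests; paths are placeholders): merge the adapter into the base model once, then ship only the merged weights in the container.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# load the base model and the adapter, then bake the deltas in
base = AutoModelForCausalLM.from_pretrained("path/to/chinese-llama-2-7b")
model = PeftModel.from_pretrained(base, "output/checkpoint-400")
merged = model.merge_and_unload()

# the merged model no longer needs base_model_name_or_path at load time
merged.save_pretrained("output/merged-model")
```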
### Dependencies (required for code-related issues)
```
# Paste your dependency info here (inside this code block)
```
### Run logs or screenshots
```
# Paste your run logs here (inside this code block)
```
|
closed
|
2024-03-19T03:38:53Z
|
2024-04-11T23:45:24Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/546
|
[
"stale"
] |
xiaoToby
| 3
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 313
|
[BUG] Can't download video from douyin
|
I used the sample Python code, and it returned the following error when downloading the video.
URL: https://www.douyin.com/video/6914948781100338440
```
ERROR
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/Users/jame/Code/home/video/download.py", line 12, in <module>
    asyncio.run(hybrid_parsing(url=input("Paste Douyin/TikTok/Bilibili share URL here: ")))
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/jame/Code/home/video/download.py", line 8, in hybrid_parsing
    result = await api.hybrid_parsing(url)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jame/.local/share/virtualenvs/video-hF2q1l9e/lib/python3.11/site-packages/douyin_tiktok_scraper/scraper.py", line 467, in hybrid_parsing
    data = await self.get_douyin_video_data(video_id) if url_platform == 'douyin' \
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jame/.local/share/virtualenvs/video-hF2q1l9e/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
    return await fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jame/.local/share/virtualenvs/video-hF2q1l9e/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jame/.local/share/virtualenvs/video-hF2q1l9e/lib/python3.11/site-packages/tenacity/__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x103763b90 state=finished raised ValueError>]
```
|
closed
|
2023-11-02T08:57:37Z
|
2024-02-07T03:45:27Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/313
|
[
"BUG",
"enhancement"
] |
nhannguyentrong
| 6
|
encode/databases
|
sqlalchemy
| 407
|
Postgres backend Record is a Mapping but some Mapping methods are deprecated
|
Since https://github.com/encode/databases/pull/299 upgraded to sqlalchemy 1.4, the postgres backend's Record object now mimics the behavior of sqlalchemy's Row which is meant to behave similarly to a NamedTuple (and inherits from collections.abc.Sequence) https://docs.sqlalchemy.org/en/14/changelog/migration_14.html#change-4710-core
Meanwhile, postgres backend's Record object inherits from collections.abc.Mapping and is therefore required to fulfill the Mapping interface, which includes keys() and values() which are now deprecated.
SQLAlchemy provides a `mappings()` method on Result which will cause it to return RowMapping objects rather than Row objects, and those look like Mappings.
I encountered this issue working with fastapi and pydantic. Returning Records as pydantic models worked in the past, but now produces a deprecation warning (and I guess will eventually stop working) since pydantic's builtin validator treats the Record as a Mapping and attempts to call `dict(record)`.
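For illustration, a minimal sketch of the `mappings()` behavior with plain SQLAlchemy 1.4 (not the `databases` wrapper):
```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    result = conn.execute(text("SELECT 1 AS a, 2 AS b"))
    for row in result.mappings():  # RowMapping: a genuine Mapping
        print(dict(row))           # {'a': 1, 'b': 2}, no deprecation warning
```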
|
closed
|
2021-10-11T18:22:31Z
|
2021-10-23T10:34:37Z
|
https://github.com/encode/databases/issues/407
|
[
"clean up"
] |
ugtar
| 5
|
ultralytics/yolov5
|
machine-learning
| 12,418
|
Folder YOLOv5 does not appear in the directory after its installation.
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi everybody!
I am new to using the YOLOv5 tool. I have followed the steps indicated on the web page for YOLOv5 installation on my laptop.

And Docker has installed it properly, as you can see the container in the Docker Desktop application.

However, when I checked whether the folder was created in my local root directory, no "Yolov5" folder appeared. I have followed similar steps for other Docker projects such as CVAT, where you can see that the folder was properly created.

And the CVAT folder contains the typical structure of a Docker project.

Is there any step that I did not follow properly? Do I need to do something else to finish the installation of YOLOv5?
### Additional
_No response_
|
closed
|
2023-11-23T07:31:59Z
|
2024-10-20T19:32:20Z
|
https://github.com/ultralytics/yolov5/issues/12418
|
[
"question"
] |
frl93
| 8
|
man-group/arctic
|
pandas
| 537
|
Installation on mac fails: ValueError("You must install clang-6.0 or gcc/g++
|
#### Arctic Version
```
latest (1.66)
```
#### Arctic Store
```
# VersionStore, TickStore, or ChunkStore
```
#### Platform and version
MacOS High Sierra 10.13.4, conda 4.5.1, Python 3.6.5
#### Description of problem and/or code sample that reproduces the issue
```
pip install git+https://github.com/manahl/arctic.git
Collecting git+https://github.com/manahl/arctic.git
  Cloning https://github.com/manahl/arctic.git to /private/var/folders/37/pj3q445120nbrg_jd778320c0000gp/T/pip-zmoq6jbu-build
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/private/var/folders/37/pj3q445120nbrg_jd778320c0000gp/T/pip-zmoq6jbu-build/setup.py", line 44, in <module>
    raise ValueError("You must install clang-6.0 or gcc/g++. You can install with homebrew: brew install gcc or brew install llvm")
ValueError: You must install clang-6.0 or gcc/g++. You can install with homebrew: brew install gcc or brew install llvm
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/37/pj3q445120nbrg_jd778320c0000gp/T/pip-zmoq6jbu-build/
```
Both clang and gcc are installed. Any guidance on how to get it installed will be hugely appreciated.
|
closed
|
2018-04-18T06:22:33Z
|
2018-04-19T13:07:34Z
|
https://github.com/man-group/arctic/issues/537
|
[] |
stnatter
| 8
|
Lightning-AI/pytorch-lightning
|
machine-learning
| 19,858
|
Dynamically link arguments in `LightningCLI`?
|
### Description & Motivation
Is it possible to _dynamically_ link arguments in the `LightningCLI`, say, depending on the module or datamodule subclass that is specified in a config file or at the command line?
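For context, a sketch of what linking looks like via a subclassed CLI; with `apply_on="instantiate"` the value is read from the instantiated datamodule, which already covers many dynamic cases (the attribute names are illustrative):
```python
from lightning.pytorch.cli import LightningCLI


class MyCLI(LightningCLI):
    def add_arguments_to_parser(self, parser):
        # resolved after the datamodule is instantiated
        parser.link_arguments(
            "data.num_classes", "model.num_classes", apply_on="instantiate"
        )
```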
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
cc @borda @carmocca @mauvilsa
|
closed
|
2024-05-09T17:17:19Z
|
2024-05-14T20:11:52Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19858
|
[
"feature",
"lightningcli"
] |
EthanMarx
| 2
|
ultralytics/ultralytics
|
machine-learning
| 19,546
|
Label tools recommendation of keypoints
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello!
Since keypoint detection is combined with object detection, I must prepare the dataset labels as required.
Namely, I must put [class+xywh] together with n*[xy+visible].
Sometimes it's a little complicated for some labeling tools.
So can you please recommend some labeling tools for this?
Besides, each keypoint should be assigned to a bounding box in the labeling tool; otherwise, if they are not associated, it's meaningless.
Thanks a lot!
### Additional
_No response_
|
closed
|
2025-03-06T07:16:51Z
|
2025-03-07T04:24:02Z
|
https://github.com/ultralytics/ultralytics/issues/19546
|
[
"question",
"pose"
] |
JasonSloan
| 3
|
napari/napari
|
numpy
| 7,434
|
Add tests for world to data normal vector transformation
|
## 🧰 Task
#7422 was merged without a proper regression test. It would be good to minimally check some reference values and test against that. See comment:
https://github.com/napari/napari/pull/7422#issuecomment-2511605560
|
closed
|
2024-12-06T03:50:45Z
|
2024-12-06T14:24:49Z
|
https://github.com/napari/napari/issues/7434
|
[
"task"
] |
jni
| 2
|
HIT-SCIR/ltp
|
nlp
| 430
|
Does the 4.x pip install work on Windows? My installation reports a torch version problem, thanks
|
As titled.
|
closed
|
2020-11-02T03:57:38Z
|
2020-11-02T06:22:05Z
|
https://github.com/HIT-SCIR/ltp/issues/430
|
[] |
vitoman
| 1
|
aio-libs/aiohttp
|
asyncio
| 10,287
|
Please Add Host Resolver Rules
|
### Is your feature request related to a problem?
I am in China and I want to make a tool that scans IPs and ports to find ones that can access blocked websites.
I have written a tool to bypass DPI; it's an HTTP proxy.
When I scan an IP, I need to send the request to that specific IP directly, not let the proxy resolve it.
### Describe the solution you'd like
Add a param like `resolve="127.0.0.1:2500"` of `get`.
### Describe alternatives you've considered
I've tried to redefine some functions, but failed.
One way to do this is to modify the CONNECT request:
On the first line:
```
CONNECT www.python.org:443 HTTP/1.1
```
simply modify it to
```
CONNECT 127.0.0.1:80 HTTP/1.1
```
is okay.
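For what it's worth, aiohttp's connector already accepts a pluggable resolver, which gives `curl --resolve`-like behavior for direct (non-proxied) requests; a rough sketch (the override mapping is illustrative, and this does not rewrite CONNECT through a proxy):
```python
import aiohttp
from aiohttp.resolver import DefaultResolver


class StaticResolver(DefaultResolver):
    def __init__(self, overrides, **kwargs):
        super().__init__(**kwargs)
        self._overrides = overrides  # e.g. {"www.google.com.hk": "35.190.240.148"}

    async def resolve(self, host, port=0, family=0):
        # substitute the IP before normal resolution
        return await super().resolve(self._overrides.get(host, host), port, family)


async def fetch():
    resolver = StaticResolver({"www.google.com.hk": "35.190.240.148"})
    connector = aiohttp.TCPConnector(resolver=resolver)
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get("https://www.google.com.hk") as resp:
            return resp.status
```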
### Related component
Client
### Additional context
`curl` on unix like support `--resolve`.
So you can use:
```bash
curl https://www.google.com.hk -x 127.0.0.1:2500 --resolve www.google.com.hk:1445:35.190.240.148
```
---
Note the differences between the SNI at the TLS layer, the Host at the HTTP layer, and the IP at the AF_INET layer!
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct
|
closed
|
2024-12-31T23:51:09Z
|
2025-01-04T18:06:55Z
|
https://github.com/aio-libs/aiohttp/issues/10287
|
[
"enhancement"
] |
louiesun
| 1
|
521xueweihan/HelloGitHub
|
python
| 2,093
|
github.com/matsuwin/proctop
|
## Project recommendation
- Project URL: https://github.com/matsuwin/proctop
- Category: Go
- Follow-up plan: continuous improvement
- Description: a performance analysis tool for Linux that displays per-process resource usage in real time, similar to TOP. Supports splitting Java processes that share a name. Also works on Raspberry Pi.
- TOP process list by single-core CPU usage, auto-refreshing every 2s.
- Tiered color rendering: red > yellow > cyan > blue.
- Processes with the same name are merged automatically, with resource usage accumulated.
- Host information and processor model displayed in the header.
- Real-time processor temperature preview.
- One-click installation.
- Why recommended: ProcTop provides a clean, intuitive TOP view and is an enhancement of the top command. It also enumerates rich machine information: CPU model, temperature, whoami, IP, etc.
<img src="https://raw.githubusercontent.com/matsuwin/proctop/main/demo.png">
|
closed
|
2022-02-03T05:29:04Z
|
2022-02-22T11:45:55Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2093
|
[] |
2yanyi
| 1
|
indico/indico
|
flask
| 6,508
|
Preserve translations when moving things from jinja to react...
|
**Is your feature request related to a problem? Please describe.**
All current context provided in issue
https://github.com/indico/indico/pull/6489#issuecomment-2307068700
**Describe the solution you'd like**
A mechanism to automatically parse translation files and put them in other formats so that we do not lose translations.
The issue is fairly complex and therefore needs refinement by @tomasr8. I have already created the issue for administrative purposes.
**Update 25/10**
This feature will be implemented on the CLI transifex (TBD) push command:
1. Untranslated POT files are generated by babel
2. Translated PO files are pulled
3. Check which translations are missing in each environment (Jinja, React, JS) that another environment can supply (we will have to think about duplicate translations)
4. Push to Transifex the translations for empty strings in one environment, based on another environment's translation
5. Pull the PO files again, which should now all be in sync, with duplicates
**Sub-issues for now**
- [ ] Get missing translations that are translated in other PO files (@AjobK )
- [ ] Push bunch of translations at once to Transifex (@micsucmed )
**Concerns for later**
- [ ] Formatted strings, how to deal with those
- [ ] Doubly translated strings, which to pick for empty message string
- [ ] Implementing this feature as a whole into the Indico CLI
|
closed
|
2024-08-27T09:13:59Z
|
2024-12-12T14:22:15Z
|
https://github.com/indico/indico/issues/6508
|
[
"enhancement"
] |
AjobK
| 8
|
lucidrains/vit-pytorch
|
computer-vision
| 322
|
Multi-GPU training of NaViT model
|
Hello!
I have a question about multi-GPU training using [NaViT](https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/na_vit.py#L186) model.
I am able to run the training on 1 GPU, but not on several.
Any suggestions or ideas how it's possible to use multiple GPUs for training this particular model?
FYI: DP doesn't work straightaway.
Thank you in advance.
|
closed
|
2024-07-04T11:30:25Z
|
2024-07-12T07:50:18Z
|
https://github.com/lucidrains/vit-pytorch/issues/322
|
[] |
b5y
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 407
|
OSError: [WinError 126] The specified module could not be found (Real-Time-Voice-Cloning-master)
|
```
PS C:\Users\Pritam> cd D:\Game\Real-Time-Voice-Cloning-master
PS D:\Game\Real-Time-Voice-Cloning-master> python demo_cli.py
2020-07-08 12:18:00.259919: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
Traceback (most recent call last):
  File "demo_cli.py", line 3, in <module>
    from synthesizer.inference import Synthesizer
  File "D:\Game\Real-Time-Voice-Cloning-master\synthesizer\inference.py", line 1, in <module>
    from synthesizer.tacotron2 import Tacotron2
  File "D:\Game\Real-Time-Voice-Cloning-master\synthesizer\tacotron2.py", line 3, in <module>
    from synthesizer.models import create_model
  File "D:\Game\Real-Time-Voice-Cloning-master\synthesizer\models\__init__.py", line 1, in <module>
    from .tacotron import Tacotron
  File "D:\Game\Real-Time-Voice-Cloning-master\synthesizer\models\tacotron.py", line 5, in <module>
    from synthesizer.models.modules import *
  File "D:\Game\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py", line 2, in <module>
    import torch
  File "C:\Users\Pritam\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\__init__.py", line 81, in <module>
    ctypes.CDLL(dll)
  File "C:\Users\Pritam\AppData\Local\Programs\Python\Python37\lib\ctypes\__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
PS D:\Game\Real-Time-Voice-Cloning-master>
```
This is what I get after I run this in PowerShell. I'm pretty sure I've done everything right at every step! Can anyone figure out what may be missing here?
|
closed
|
2020-07-08T06:52:32Z
|
2020-07-10T20:19:50Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/407
|
[] |
pritxm
| 30
|
d2l-ai/d2l-en
|
computer-vision
| 2,508
|
SageMaker Studio Lab link is outdated and is only for PyTorch
|
I started trying out Amazon SageMaker Studio Lab recently,

and the button on D2L links you to: https://studiolab.sagemaker.aws/import/github/d2l-ai/d2l-pytorch-sagemaker-studio-lab/blob/main/GettingStarted-D2L.ipynb
This makes you clone this project: d2l-pytorch-sagemaker-studio-lab
(https://github.com/d2l-ai/d2l-pytorch-sagemaker-studio-lab)

I noticed that this repo hasn't been updated since 2022 and even though I clicked on JAX as my preference on D2L, the SageMaker Studio Lab button still brings me to this PyTorch repo. Here's an image where you can clearly see the difference between D2L (on the left) and this repo (on the right).

I am currently following along with the d2l-jax-sagemaker repo (https://github.com/d2l-ai/d2l-jax-sagemaker), and it seems up to date. But I would really appreciate it if the button on the actual D2L site brought us to an updated D2L repo for Studio Lab that corresponds to the chosen ML framework preference. Studio Lab seems like a cool environment to learn ML and I would like to follow D2L in JAX.
|
closed
|
2023-06-07T17:17:17Z
|
2023-08-28T08:48:32Z
|
https://github.com/d2l-ai/d2l-en/issues/2508
|
[
"bug",
"feature request"
] |
AngelynDisguise
| 1
|
JaidedAI/EasyOCR
|
pytorch
| 446
|
Performance on TextOCR Dataset
|
**Motivation**
Improve the benchmark performance of all algorithms based on TextOCR dataset released by Facebook AI research team
Related resources
https://textvqa.org/textocr
**Overview**
TextOCR requires models to perform text-recognition on arbitrary shaped scene-text present on natural images. TextOCR provides ~1M high quality word annotations on TextVQA images allowing application of end-to-end reasoning on downstream tasks such as visual question answering or image captioning.
**Statistics**
- 28,134 natural images from TextVQA
- 903,069 annotated scene-text words
- 32 words per image on average
|
closed
|
2021-06-03T08:56:50Z
|
2022-03-02T09:25:00Z
|
https://github.com/JaidedAI/EasyOCR/issues/446
|
[] |
jkcg-learning
| 0
|
yaroslaff/nudecrawler
|
web-scraping
| 4
|
Not an issue just a question
|
Hello, how come when I search a term on your application it returns 1 or 2 results, but when I use another search service that I found online, it returns many more for the exact same search term?
|
closed
|
2023-04-11T22:52:13Z
|
2023-06-08T11:55:49Z
|
https://github.com/yaroslaff/nudecrawler/issues/4
|
[] |
6R1M4C3
| 4
|
pytest-dev/pytest-html
|
pytest
| 565
|
Release 3.2.0 is missing as release in your repo
|
Hi. The latest release as shown in Github is 3.1.1 dating back to July. There is only a tag 3.2.0. However it seems you published it already to PyPi and the changelog also shows 2022-10-25 as release date for the 3.2.0. Please make it a real release in Github as well.
|
closed
|
2022-11-10T12:42:00Z
|
2022-11-11T08:17:30Z
|
https://github.com/pytest-dev/pytest-html/issues/565
|
[] |
WSADEERLBB
| 1
|
huggingface/datasets
|
pytorch
| 7,457
|
Document the HF_DATASETS_CACHE env variable
|
### Feature request
Hello,
I have a use case where my team is sharing models and dataset in shared directory to avoid duplication.
I noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mention the `HF_HOME` environment variable but never the `HF_DATASETS_CACHE`.
It would be nice to add `HF_DATASETS_CACHE` to the datasets documentation if it's an intended feature.
If it's not, I think a deprecation warning would be appreciated.
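For illustration, a minimal example of using it (the path is illustrative; the variable must be set before `datasets` is imported):
```python
import os

# point the shared cache before the library reads its configuration
os.environ["HF_DATASETS_CACHE"] = "/shared/hf/datasets"

import datasets  # picks up HF_DATASETS_CACHE at import time
```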
### Motivation
This variable is fully working and similar to what `HF_HUB_CACHE` does for models, so it's nice to know that this exists. This seems to be a quick change to implement.
### Your contribution
I could contribute since this is only affecting a small portion of the documentation
|
open
|
2025-03-17T12:24:50Z
|
2025-03-20T10:36:46Z
|
https://github.com/huggingface/datasets/issues/7457
|
[
"enhancement"
] |
LSerranoPEReN
| 4
|
horovod/horovod
|
tensorflow
| 3,162
|
Spark with Horovod fails with py4j.protocol.Py4JJavaError
|
**Environment:**
1. Framework: TensorFlow, Keras
2. Framework version: tensorflow-2.4.3, keras-2.6.0
3. Horovod version: horovod-0.22.1
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version: python-3.6.9
8. Spark / PySpark version: Spark-3.1.2
9. Ray version:
10. OS and version: Ubuntu 18
11. GCC version: gcc-7.5.0
12. CMake version: cmake-3.21.2
When running the sample script keras_spark_rossmann_estimator.py, the Spark app fails at model training with the following error:
```
Total params: 2,715,603
Trainable params: 2,715,567
Non-trainable params: 36
__________________________________________________________________________________________________
/home/cc/.local/lib/python3.6/site-packages/keras/optimizer_v2/optimizer_v2.py:356: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
"The `lr` argument is deprecated, use `learning_rate` instead.")
num_partitions=80
writing dataframes
train_data_path=file:///tmp/intermediate_train_data.0
val_data_path=file:///tmp/intermediate_val_data.0
train_partitions=76===========================================> (15 + 1) / 16]
val_partitions=8
/home/cc/.local/lib/python3.6/site-packages/horovod/spark/common/util.py:479: FutureWarning: The 'field_by_name' method is deprecated, use 'field' instead
metadata, avg_row_size = make_metadata_dictionary(train_data_schema)
train_rows=806871
val_rows=37467
Exception in thread Thread-3: (0 + 8) / 8]
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/runner.py", line 140, in run_spark
result = procs.mapPartitionsWithIndex(mapper).collect()
File "/usr/local/lib/python3.6/dist-packages/pyspark/rdd.py", line 949, in collect
sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/home/cc/.local/lib/python3.6/site-packages/py4j/java_gateway.py", line 1310, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/usr/local/lib/python3.6/dist-packages/pyspark/sql/utils.py", line 111, in deco
return f(*a, **kw)
File "/home/cc/.local/lib/python3.6/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job 63 cancelled part of cancelled job group horovod.spark.run.0
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2258)
at org.apache.spark.scheduler.DAGScheduler.handleJobCancellation(DAGScheduler.scala:2154)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleJobGroupCancelled$4(DAGScheduler.scala:1048)
at scala.runtime.java8.JFunction1$mcVI$sp.apply(JFunction1$mcVI$sp.java:23)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
at org.apache.spark.scheduler.DAGScheduler.handleJobGroupCancelled(DAGScheduler.scala:1047)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2407)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2387)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2376)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2196)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2217)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2236)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2261)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:180)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Traceback (most recent call last):
File "keras_spark_rossmann_estimator.py", line 397, in <module>
keras_model = keras_estimator.fit(train_df).setOutputCols(['Sales_output'])
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/common/estimator.py", line 35, in fit
return super(HorovodEstimator, self).fit(df, params)
File "/usr/local/lib/python3.6/dist-packages/pyspark/ml/base.py", line 161, in fit
return self._fit(dataset)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/common/estimator.py", line 81, in _fit
backend, train_rows, val_rows, metadata, avg_row_size, dataset_idx)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/keras/estimator.py", line 317, in _fit_on_prepared_data
env=env)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/common/backend.py", line 85, in run
**self._kwargs)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/runner.py", line 284, in run
_launch_job(use_mpi, use_gloo, settings, driver, env, stdout, stderr)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/runner.py", line 155, in _launch_job
settings.verbose)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/runner/launch.py", line 706, in run_controller
gloo_run()
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/runner.py", line 152, in <lambda>
run_controller(use_gloo, lambda: gloo_run(settings, nics, driver, env, stdout, stderr),
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/gloo_run.py", line 67, in gloo_run
launch_gloo(command, exec_command, settings, nics, {}, server_ip)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/runner/gloo_run.py", line 271, in launch_gloo
.format(name=name, code=exit_code))
RuntimeError: Horovod detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was:
Process name: 0
Exit code: 255
```
This is followed by the following thread dump
```
21/09/13 04:12:33 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:176)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:150)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:691)
at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:255)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:140)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:53)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/09/13 04:12:33 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:176)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:150)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:691)
at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:255)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:140)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:53)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
```
|
open
|
2021-09-13T05:06:05Z
|
2021-09-14T22:34:34Z
|
https://github.com/horovod/horovod/issues/3162
|
[
"bug"
] |
aakash-sharma
| 2
|
mwaskom/seaborn
|
pandas
| 3,245
|
Several test failures due to matplotlib no longer auto-flattening inputs to pcolormesh
|
Context: We noticed some seaborn failures downstream when testing on the nightly matplotlib wheels.
It turns out that a recent change in matplotlib's dev branch (https://github.com/matplotlib/matplotlib/pull/24638) is causing `matrix._HeatMapper._annotate_heatmap` to fail, because inputs to pcolormesh are no longer flattened and consequently the return value of `get_facecolor` changes shape.
There are also various test failures in `test_matrix` and `test_distribution` which fail due to comparisons between flattened & non-flattened arrays.
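For illustration, a minimal sketch (not taken from the seaborn code) of the shape change being described; the exact output shapes in the comments are assumptions inferred from the report above, not verified against both matplotlib versions:
```python
import numpy as np
import matplotlib.pyplot as plt

data = np.arange(6).reshape(2, 3)  # small 2-D array, as a heatmap would pass
fig, ax = plt.subplots()
mesh = ax.pcolormesh(data)

fc = mesh.get_facecolor()
print(fc.shape)
# Released matplotlib (auto-flattened):      (6, 4)   -- one RGBA row per cell
# dev branch after #24638 (assumed shape):   (2, 3, 4) -- mesh shape preserved
# A version-agnostic consumer can flatten explicitly before comparing:
fc_flat = fc.reshape(-1, fc.shape[-1])
```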
|
closed
|
2023-02-05T21:45:21Z
|
2023-08-27T19:53:54Z
|
https://github.com/mwaskom/seaborn/issues/3245
|
[
"mod:matrix",
"upstream"
] |
IAlibay
| 9
|
pandas-dev/pandas
|
python
| 60,471
|
BUG: DataFrameGroupBy.apply ignores group_keys setting when empty
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
df = pd.DataFrame({'A': 'a a b'.split(), 'B': [1, 2, 3], 'C': [4, 6, 5]})
g1 = df.groupby('A', group_keys=False)
df = pd.DataFrame({'A': [], 'B': [], 'C': []})
g2 = df.groupby('A', group_keys=False)
g3 = df.groupby('A', group_keys=True)
r1 = g1.apply(lambda x: x / x.sum())
r2 = g2.apply(lambda x: x / x.sum())
r3 = g3.apply(lambda x: x / x.sum())
print(r1.index) # Index([0, 1, 2], dtype='int64')
print(r2.index) # Index([], dtype='float64', name='A')
print(r3.index) # Index([], dtype='float64', name='A')
```
### Issue Description
The `group_keys` parameter has no effect when the source DataFrame is empty.
### Expected Behavior
`group_keys=False` should not include the group keys in the index, regardless of whether the source DataFrame is empty.
I would expect results such as:
```python
print(r2.index) # Index([], dtype='float64'), or
print(r2.index) # RangeIndex(start=0, stop=0, step=1)
```
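For reference, a minimal sketch of a user-side workaround (not a proposed pandas fix); the helper name `apply_no_keys` is hypothetical, and the index reset only covers the empty-input case reported here:
```python
import pandas as pd

def apply_no_keys(df, by, func):
    out = df.groupby(by, group_keys=False).apply(func)
    # On affected pandas versions an empty input still comes back with the
    # group key as the index; reset it to mimic group_keys=False.
    if df.empty:
        out.index = pd.RangeIndex(0)
    return out

empty = pd.DataFrame({'A': [], 'B': [], 'C': []})
print(apply_no_keys(empty, 'A', lambda x: x / x.sum()).index)
# RangeIndex(start=0, stop=0, step=1)
```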
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.11
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.22631
machine : AMD64
processor : Intel64 Family 6 Model 141 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : es_ES.cp1252
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 23.0.1
Cython : None
sphinx : None
IPython : 8.30.0
adbc-driver-postgresql: None
...
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
|
closed
|
2024-12-02T16:40:19Z
|
2024-12-06T18:13:47Z
|
https://github.com/pandas-dev/pandas/issues/60471
|
[
"Bug",
"Groupby",
"Apply"
] |
ManelBH
| 2
|